Syncing zsh named directories

One of my most used features of zsh is named directories. I keep my dotfiles synced across multiple machines via Dropbox (referral link) and a Makefile that creates symlinks automagically.


My named directory definitions all start with ~, my home directory, and the directories need to exist for expansion to work properly and to avoid errors when I log in. When I first implemented syncing I added this check with a very simple test:

```zsh
[[ -d ~/Dropbox/wiki           ]] && wiki=~/Dropbox/wiki
```

One line like this isn’t too bad, but when you have 30 of them it gets a little silly looking.

I finally took the time to sit down and read the docs on zsh associative arrays and whipped up a much better solution.


```zsh
typeset -A NAMED_DIRS

NAMED_DIRS=(
    bin      ~/Dropbox/Documents/bin
    docs     ~/Dropbox/Documents
    lbin     ~/Documents/bin
    wiki     ~/Dropbox/wiki
    dot      ~/Dropbox/Documents/dotfiles
    octo     ~/Dropbox/Documents/octopress
)

for key in ${(k)NAMED_DIRS}
do
    if [[ -d ${NAMED_DIRS[$key]} ]]; then
        export $key=${NAMED_DIRS[$key]}
    else
        unset "NAMED_DIRS[$key]"
    fi
done
```
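
Since each value is an absolute path, zsh will happily expand the exported name with a leading tilde anywhere a directory is expected. A couple of illustrative uses, with names from the array above:

```zsh
cd ~wiki    # same as cd ~/Dropbox/wiki
ls ~dot     # lists ~/Dropbox/Documents/dotfiles
echo ~octo  # prints the full octopress path
```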

The unset isn’t necessary for the base functionality; it’s there because I had another problem. With 30 or 40 named dirs, some belonging to projects that I might not have looked at in 6 months and suddenly need to come back to, it’s hard to remember the name I gave a directory. I added an lsdirs function that lists just the named directories that exist on the machine I’m on.

```zsh
function lsdirs () {
    for key in ${(k)NAMED_DIRS}
    do
        printf "%-10s %s\n" $key ${NAMED_DIRS[$key]}
    done
}
```

This gives me nice, easy-to-read output with both the names and the full paths.
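
For example (a made-up sample, assuming only some of the entries from the array above exist on this machine):

```
bin        /home/michael/Dropbox/Documents/bin
docs       /home/michael/Dropbox/Documents
wiki       /home/michael/Dropbox/wiki
```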

This blog post is mostly a future reference for myself, but perhaps it will be useful to you too.

For those interested, here is the Makefile I use:


Makefile:

```make
DOTFILES ?= $(shell pwd)
HOSTNAME ?= $(shell hostname)

all: shell code perl web mail

shell:
	ln -fs $(DOTFILES)/zsh/zshrc        ${HOME}/.zshrc
	ln -fns $(DOTFILES)/zsh/zsh.d       ${HOME}/.zsh.d
	ln -fs $(DOTFILES)/gnupg            ${HOME}/.gnupg
	ln -fs $(DOTFILES)/ssh/config       ${HOME}/.ssh/config

    ifeq ($(HOSTNAME),orion)
	ln -fs $(DOTFILES)/screen/screenrc_orion ${HOME}/.screenrc
    else
	ln -fs $(DOTFILES)/screen/screenrc  ${HOME}/.screenrc
    endif

code:
	ln -fs $(DOTFILES)/gitconfig        ${HOME}/.gitconfig
	ln -fs $(DOTFILES)/vimrc            ${HOME}/.vimrc
	ln -fns $(DOTFILES)/vim             ${HOME}/.vim
	ln -fs $(DOTFILES)/.tm_properties   ${HOME}/.tm_properties

perl:
	ln -fs $(DOTFILES)/perltidyrc       ${HOME}/.perltidyrc
	ln -fs $(DOTFILES)/perlcritic       ${HOME}/.perlcriticrc

web:
	ln -fs $(DOTFILES)/vimperatorrc     ${HOME}/.vimperatorrc

mail:
	ln -fs $(DOTFILES)/muttrc           ${HOME}/.muttrc
```
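
On a fresh machine, a bare `make` creates every link via the `all` target, or I can pick out a subset by naming targets; a hypothetical run from inside the dotfiles checkout looks like:

```
$ cd ~/Dropbox/Documents/dotfiles
$ make shell code
```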


DHS NOC MMC Poor Security Examples

The Media Monitoring Capability of the Department of Homeland Security’s National Operations Center is tasked with keeping existing situation summaries for both domestic and international events up to date with open source media information. Their second mission is “to constantly monitor all available open source information with the goal of expeditiously alerting the NOC Watch Team and other key Department personnel of emergent situations”. This means monitoring various online news sites as well as social networks.

Through a Freedom of Information Act request their Analyst’s Desktop Binder (4.4 MB PDF) was made publicly available. This is essentially the employee manual for MMC analysts, describing how to perform their job. There are quite a few very interesting tidbits in it, such as the list of keywords and search terms used when searching social media sites.

Section 2.6 beginning on page 14 lists credible sources broken out into various tiers. Local affiliates of the major networks are considered first tier sources and don’t require any additional corroboration prior to release. Meanwhile, all blogs “even if they are of a serious, political nature,” are considered third tier sources and must be corroborated by a first tier source.


Also of interest were screenshots of various interfaces they use, such as the Item Of Interest (IOI) report screen used to log an article and its metadata.

The real problem with this document is the peek it gives inside Homeland Security at how they do things.

Near the end, in section 8 ‘Usernames, Passwords & Contact Information,’ we see that a common username and password are shared amongst analysts to log in to computers, shared drives, DHS email, and more.


I am interested in the Twitter information in this section. @DHSNOCMMC1 is a protected Twitter account with no tweets, following 339 users. I’d love to see the list of accounts it follows, but I’m sure it is filled with just a lot of boring traditional news organizations.

Jumping back a bit, to section 7, we find something even worse. This section is called “HSIN REDACTED Connection Instructions.” Here we see that REDACTED is a system so vital to our security that even its name must not be known.

See?!? It’s obvious even to my seven-year-old that this is a system so vital to keep secret that even the name must be secret.

If you continue reading this section, you come across this very surprising passage:


For those using screen readers due to blindness, or those not trusting their eyes, this is the sentence in that image:

After clicking on next, the following screen will appear and you must select “Accept for all sessions”


This is followed by an image of an invalid SSL certificate warning dialog. All along experts have worried about teaching everyone’s mother not to blindly accept invalid certificate warnings. If Homeland Security is instructing their analysts to blindly accept invalid SSL certificate warnings then not only have we lost the war, the enemy is dancing on our grave as the baby jesus cries dark tears in the corner.


Thermostat RRD display with Javascript

In the last post I described my new Filtrete 3M-50 thermostat and my data collection using a script from a wiki. I wanted an easy way to get values for datapoints on the graphs and have been looking for a use for javascriptRRD since discovering it a year or more ago.


I ended up having to modify the library slightly to get it to pass the yaxes option on to flot. My source is over on github.
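
For reference, this is roughly what the option looks like when handed straight to flot (plain flot usage rather than the javascriptRRD wrapper; the series and axis ranges here are made up for illustration):

```javascript
// Two y axes: temperatures on the left, heat/fan state on the right.
var series = [
    { label: "outside", data: outsideTemps, yaxis: 1 },
    { label: "heat",    data: heatState,    yaxis: 2 }
];

var options = {
    yaxes: [
        { min: 40, max: 90 },                  // axis 1: temperature (F)
        { position: "right", min: 0, max: 1 }  // axis 2: on/off state
    ]
};

$.plot($("#placeholder"), series, options);
```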


You can see the real-time data over here.

Tweeting Thermostat


I recently replaced the thermostat in our new house with a Filtrete 3M-50. This thermostat has a wifi module and is available at your local Home Depot for $99. I’d been aware of it for a while but had planned on purchasing the $250 nest. The turning point for me came one evening as I was going to bed. I was hot, the heat was running, and I chose to turn the fan on instead of going downstairs and adjusting the thermostat. In my defense, I knew the thermostat would switch to night mode in another 45 minutes and reduce the set point, but I was still wasting energy.

Comparing the Filtrete to the nest, the main features you are missing out on are:

* automatic occupancy detection
* humidity detection
* automatic schedule learning

Automatic scheduling is a big win for the average person, but our schedule isn’t static enough for this to be of much benefit to us. The location of our thermostat is about the best place for a thermostat, but not for occupancy detection, so this feature isn’t missed. I’m a bit sad about the loss of humidity detection (the next Filtrete model up has it for $200), but it’s not a deal breaker.

The real win for geeks with a wifi thermostat isn’t the iPhone or Android apps, it’s the ability to interface with it and do geeky things. The Filtrete 3M-50 is a rebadged Radio Thermostat Company of America CT30. Radio Thermostat has the API documentation hidden on their website in the latest news section. Scroll to the bottom of the page and click the ‘I Accept’ checkbox. There is also a 3rd party wiki with API documentation and sample code.
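
Before writing anything permanent, it’s worth poking the thermostat’s REST interface by hand to see what comes back. A throwaway sketch (the IP address is my thermostat’s, and I’m assuming the main /tstat status resource described on the wiki):

```perl
#!/usr/bin/perl
use 5.010;
use strict;
use warnings;

use Mojo::UserAgent;
use Data::Dumper;

# Dump whatever the thermostat's status resource returns: current
# temperature, mode, set point, and so on.
my $ua   = Mojo::UserAgent->new;
my $data = $ua->get('http://10.8.8.98/tstat')->res->json;

die "No response from the thermostat" unless $data;
print Dumper($data);
```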

I started with the poller.pl script available on the wiki to go from nothing to something quickly. There’s a lack of error checking in it and in the non-CPAN Radio::Thermostat module available on that page, but it gets you up and running quickly.


After this, I wrote a simple script to tweet daily thermostat information.


tweetstat.pl:

```perl
#!/usr/bin/perl

use 5.010;
use strict;
use warnings;

use RRDs;
use DateTime;
use Net::Twitter;
use Mojo::UserAgent;
use List::Util 'sum';

# Configuration: Twitter OAuth credentials, thermostat URL, and RRD path.
my $consumer_key    = '';
my $consumer_secret = '';
my $token           = '';
my $token_secret    = '';

my $thermostat_url  = 'http://10.8.8.98/tstat/datalog';
my $rrd_path        = '/home/michael/srv/therm/poller/data/temperature.rrd';

my $runtime  = get_runtime_string();
my $averages = get_averages_string();

die unless $runtime && $averages;

my $tweet = "Yesterday $averages with $runtime.";
say $tweet;

my $nt = Net::Twitter->new(
    traits              => [qw/OAuth API::REST/],
    consumer_key        => $consumer_key,
    consumer_secret     => $consumer_secret,
    access_token        => $token,
    access_token_secret => $token_secret,
);

$nt->update($tweet);

# Ask the thermostat for yesterday's heat/cool runtimes and build a phrase.
sub get_runtime_string {
    my $ua   = Mojo::UserAgent->new;
    my $data = $ua->get($thermostat_url)->res->json;

    die "Didn't get runtimes for yesterday" unless $data;

    return build_runtime_string( 'heat', $data->{yesterday} )
        || build_runtime_string( 'cool', $data->{yesterday} )
        || 'no heat or A/C used';
}

sub build_runtime_string {
    my ( $mode, $data ) = @_;
    my $times = $data->{ $mode . '_runtime' };
    return unless $times->{hour} || $times->{minute};
    return sprintf '%sing runtime of %i:%02i',
        $mode, $times->{hour}, $times->{minute};
}

# Average yesterday's inside and outside temperatures from the RRD.
sub get_averages_string {
    my $yesterday = DateTime->now()->subtract( days => 1 );

    my $start_of_day = DateTime->new(
        year      => $yesterday->year,
        month     => $yesterday->month,
        day       => $yesterday->day,
        hour      => 0,
        minute    => 0,
        second    => 0,
        time_zone => 'America/New_York',
    );

    my $end_of_day = $start_of_day->clone->add( hours => 24 );

    my ( $start, $step, $names, $data ) = RRDs::fetch(
        $rrd_path,
        'AVERAGE',
        '-s ' . $start_of_day->epoch,
        '-e ' . $end_of_day->epoch,
    );

    if ( my $error = RRDs::error ) {
        die "Error reading RRD data: $error";
    }

    my ( @ext_temp, @int_temp );
    for my $row (@$data) {
        next unless grep { $_ } @$row;    # skip rows with no data
        my ( $outside, $inside ) = @$row[ 2, 5 ];
        push @ext_temp, $outside;
        push @int_temp, $inside;
    }

    return sprintf 'averaged %.1fF outside and %.1fF inside',
        average(@ext_temp), average(@int_temp);
}

sub average {
    return sum(@_) / @_;
}
```


Tweets look like:


* Yesterday averaged 59.5F outside and 67.9F inside with no heat or A/C used.
* Yesterday averaged 64.2F outside and 70.1F inside with heating runtime of 0:17.
* Yesterday averaged 58.9F outside and 69.0F inside with heating runtime of 0:31.

You can follow my house on Twitter as @mikegrbs_house.


Migrating Post Tags from WordPress to Octopress

I’ve migrated from WordPress to Octopress and used the Jekyll wordpress migrator to move my posts over. Unfortunately, this doesn’t preserve post tags. The output looks like this:


```
---
layout: post
title: Devops w/ Perl @ Linode PPW Talk Slides
wordpress_id: 330
wordpress_url: http://michael.thegrebs.com/?p=330
date: 2011-10-27 14:46:31 -04:00
---
Earlier this month I gave a talk about...
```


Having the WordPress post ID means extracting the post tags from the db should be quite easy. First we define our desired output:

```
---
layout: post
title: Devops w/ Perl @ Linode PPW Talk Slides
wordpress_id: 330
wordpress_url: http://michael.thegrebs.com/?p=330
date: 2011-10-27 14:46:31 -04:00
tags: geek perl slides
---
Earlier this month I gave a talk about ...
```


The tags field can really appear anywhere in this YAML fragment, but I chose to throw it at the end. With 103 posts to loop over, each needing a query run and a new line inserted, a short script makes sense. The real win for our script, though, is the Tie::File module, which presents each file as an array with an element for each line.
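
As a tiny illustration of why that’s handy (the file name and line number here are made up), a splice on the tied array rewrites the file in place:

```perl
use strict;
use warnings;
use Tie::File;

# Each element of @lines is one line of the file; edits are written back.
tie my @lines, 'Tie::File', 'example.markdown' or die "No tie: $!";
splice @lines, 6, 0, 'tags: geek perl';   # insert a new line 7
untie @lines;
```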

From there it’s simply a matter of reverse engineering the WordPress schema to come up with a query that will return a space-delimited list of tags for a given post ID.

```sql
SELECT GROUP_CONCAT(t.`name` SEPARATOR " ") FROM `wp_term_relationships` r
INNER JOIN `wp_term_taxonomy` tax ON r.`term_taxonomy_id` = tax.`term_taxonomy_id`
INNER JOIN `wp_terms` t ON tax.`term_id` = t.`term_id`
WHERE r.`object_id` = ?
AND tax.`taxonomy` = "post_tag";
```


Throw that query into a script that iterates over the list of files and uses Tie::File to add a line to each post, and we get:

word2octo-tags.pl:

```perl
#!/usr/bin/perl

use 5.010;
use strict;
use warnings;

use DBI;
use Tie::File;

my $path_to_posts = 'source/_posts';

my $dbh = DBI->connect( 'DBI:mysql:db_name:db_host', 'db_user', 'db_pass' );

my $get_tags = $dbh->prepare(q{
    SELECT GROUP_CONCAT(t.`name` SEPARATOR " ") FROM `wp_term_relationships` r
    INNER JOIN `wp_term_taxonomy` tax ON r.`term_taxonomy_id` = tax.`term_taxonomy_id`
    INNER JOIN `wp_terms` t ON tax.`term_id` = t.`term_id`
    WHERE r.`object_id` = ?
    AND tax.`taxonomy` = "post_tag";
    });

for my $file (glob "$path_to_posts/*.markdown") {
    say $file;
    add_tags_to_file($file);
}

sub add_tags_to_file {
    my $filename = shift;
    tie my @file, 'Tie::File', $filename or die "No tie: $!";
    my $tags;
    for my $rec (0 .. $#file) {
        my $data = $file[$rec];
        if ($data =~ m/^wordpress_id: (\d+)$/) {
            $tags = get_tag($1);
        }
        if ($data =~ /^date:/) {
            # Insert the tags line right after the date line.
            splice @file, $rec + 1, 0, $tags if $tags;
            return;
        }
    }
}

sub get_tag {
    my $post = shift;
    $get_tags->execute($post);
    my ($tags) = $get_tags->fetchrow_array();
    return "tags: " . $tags if $tags;
}
```


Local Growl Notifications from Remote Irssi

What I’ve tried, What I want


Skip to the next section if you just care about the end result. In the past, I’ve had my remote Irssi client send UDP Growl packets to my static home IP that were then forwarded to the broadcast address on the local network. This worked great and allowed me to get Growl notifications on whichever computer I happened to be sitting at. These days, I’m always on my laptop, even when sitting at another computer (Teleport for the mega win).

I wanted a solution that would allow me to receive Growl notifications without having to forward any ports and no matter what the public IP for my laptop may be. I use App::PersistentSSH to maintain a persistent SSH control master connection to my Linode, so the ideal solution uses a reverse tunnel over this connection to get the notifications to my laptop.

My Current Solution


* Added `--ssh_opts -R 127.0.0.1:22992:127.0.0.1:22` to my App::PersistentSSH command line. This connects localhost:22992 on my Linode to port 22 on my laptop.
* local-growl.pl — an Irssi script that uses growlnotify to send Growl notifications
* growlnotify.pl — an ssh proxy for growlnotify; it passes its arguments to growlnotify on a remote system via ssh (a rough sketch of the idea follows this list)
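
The proxy idea is simple enough to sketch. This isn’t the actual growlnotify.pl from the repository, just an illustration of the concept using the same config values shown further down; real code also needs to shell-quote the arguments, which I gloss over here:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hand our arguments to growlnotify on the laptop reachable through the
# reverse tunnel (localhost:22992 on the Linode -> the laptop's sshd).
my $ssh_host    = 'localhost';
my $ssh_port    = 22992;
my $ssh_user    = 'mgreb';
my $growlnotify = '/usr/local/bin/growlnotify';

exec 'ssh', '-p', $ssh_port, "$ssh_user\@$ssh_host", $growlnotify, @ARGV
    or die "exec ssh failed: $!";
```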

Simple push Growl notifications from my Linode -> my MacBook Air whenever it is online. The scripts and an irssi.png to use in the notifications are available on GitHub.

Configuration for local-growl.pl is via Irssi’s built-in settings; type /set growl to see a list of possibilities:

```
[local-growl]
growl_enabled = ON
growl_growlnotifypath = /home/michael/bin/growlnotify
growl_image_path = /Users/mgreb/Documents/irssi.png
growl_show_hilights = ON
growl_show_private_messages = ON
growl_sticky = OFF
```


Configuration for growlnotify.pl is via a set of config variables near the top:


```perl
my $ssh_host    = 'localhost';
my $ssh_port    = 22992;
my $ssh_user    = 'mgreb';
my $growlnotify = '/usr/local/bin/growlnotify';
```


Hopefully you will find this useful.


Whole House Energy Monitoring

I placed an order for a TED 5000 on February 10th. The TED 5000 is a whole house energy monitor – a kill-a-watt for the whole house. I’ve spent a lot of time reading about the products available in this space. I’ve looked at about 10 products for whole house energy monitoring and three jump to the top: the wattvision, PowerHouse Dynamics’s eMonitor, and the TED 5000.

The wattvision (~$250) most closely resembles the TED in terms of price and features. The wattvision uses an optical sensor that counts the rotation of the disk in the utility meter. This sensor is connected to the device, which is powered by an AC adapter and sends the data to wattvision’s servers via your home wifi network. This is the easiest to install, but would require running the sensor cable from the outside to inside. Additionally, all data is sent to wattvision’s servers. I’d also need to subscribe to wattvision’s $8.99/month service for the other features I want (like API access).

The eMonitor (starting around $688 for 12 circuits) is something I just recently discovered. Really, this is my ideal solution: for $1,277, you get the necessary equipment for monitoring the power feed plus 44 individual breakers. This gives a much better view of where your power consumption is going. Per-circuit monitoring also enables nifty alerts – an SMS for power draw in the kids’ room after school starts on a weekday, or if the compressor on the fridge is running longer than normal (time to clean the coils, etc.) I just can’t justify the expense at $1,277. The price is certainly reasonable for what you are getting, but there is no way the wife would approve the purchase :/

The Energy Detective (TED, ~$200) from Energy Inc provides full house consumption monitoring like the wattvision. Unlike the wattvision, the data is stored and served from a gateway device on your home network. Two current transformers go around the two lines of the incoming split phase feed. These are connected to a box installed next to or inside the breaker panel; this box is powered, and communicates, via a connection to a breaker on each phase. The gateway is installed elsewhere in the house and plugs into an outlet for power and for communication with the current sensor, and into Ethernet to serve up its precious data.

So, February 10th I finally got approval from the wife to order the TED and purchased the TED 5000-G with overnight shipping. It arrived as expected on the 11th and I was eager to install it. A coworker (also interested in the TED) came over for dinner and to help out with the installation. I was too excited to wait for his arrival and finished the install before he got there ;-).

It’s now February 27th and I still don’t have a working TED system. The gateway I was originally shipped has a defective Ethernet port. It’s already been to Energy Inc and back and is still defective. They finally agreed to ship me a replacement gateway on the 25th but sent it with 2nd day shipping, which has the same transit time as ground (just more expensive.) The new unit should arrive March 1st.


I’m quite eager to get things up and running, and anticipate no problems once I have a working gateway. The most common issue people run into is excessive noise or attenuation of the power line communications (PLC) signal between the sensor and gateway. The forums seem to be full of people with lots of grief around this issue. The gateway does contain a ZigBee transceiver but that is for communication with the optional wireless display. Some have questioned switching from PLC to ZigBee for the current data, but a device inside the metal breaker box would have a hard time getting any sort of RF signal out, and PLC seems the most logical solution (even with its inherent issues.) Of course, this is easy for me to say since I don’t anticipate having any issue with PLC. I can see a clear ~3-4Vpk signal on the zero crossing of the A/C at the outlet where I plan to install the gateway:

AC with TED MCU transmissions


I have another blog post in the works documenting my seemingly unending issues with the other half of the TED solution: the support staff and engineers backing the product. To be fair, the support staff is batting 1 for 2; the second guy I spoke to there was great. But I’m pretty sure their engineers shouldn’t be allowed to know customers even exist, let alone talk to them on the phone. That, however, is a story for my next blog post.

Bacon Salad & Zinik Owes Me

First off, Zinik owes me bacon-flavored donuts; the Internet says it, so it must be true.

I composed my first recipe the other night and figured I’d share it with you in the hopes that your doctor might hate me as much as mine does.


Bacon Salad


Ingredients


* 1lb Pancetta
* 1lb Virginia Bacon
* 1lb Peppered Bacon
* 1lb Canadian Bacon
* 1lb Apple-Wood Smoked Bacon
* 1lb Irish Bacon
* 1lb Slab Bacon
* 2lb Hog Jowl
* 1 Bottle reduced fat Bacon Ranch Salad Dressing

Directions


1. Fry each, crispy.
2. Cut into 1×2 inch rectangles.
3. Toss in bowl with salad dressing.

Makes 2 snack-sized servings.

Task Management with Hiveminder and Perl

This is how I manage my tasks with Hiveminder on a weekly basis and the Perl script that helps me do it. I don’t really expect the Perl script to be useful to anyone as-is, but portions of it, as well as the general workflow, may be useful to others, so I’ve decided to share them.

At Linode we have a wiki page where we list 5 or more tasks we wish to accomplish during the week. There is a heading for each employee and below the heading we place our tasks. Throughout the week we can add additional tasks or mark existing ones as done.


I’ve used Hiveminder for some time. When we started the weekly task lists at Linode I found that taking a few minutes to figure out which tasks I wish to complete in the coming week works quite well for me. I started marking these tasks with the ‘week’ tag in Hiveminder. I quickly ended up writing a Perl script, week.pl, to help me manage them.

My Weekly Workflow


First thing Monday morning I run:


```
$ week.pl report
```

This prints a report with two sections. The first section lists tasks that currently have the ‘week’ tag with a line through the task ID if it is completed. This gives me a nice summary of what I planned on accomplishing the previous week and how I did. The second section lists all of my tasks currently visible in Hiveminder. I hide tasks that I know I’m not going to work on in the next few weeks so this list is usually no more than 20 or 30 items.


Sample report Image


I take this report into the Monday morning meeting with me. During the meeting, I’ll glance over this list and select items for the upcoming week. I also use this page to take notes during the meeting, writing down any new tasks that come up.

After the meeting, I add to Hiveminder any new tasks from the meeting that I won’t be working on this week.

```
$ todo.pl braindump
```

`todo.pl` comes from App::Todo, a command line Hiveminder interface. The braindump command launches $EDITOR where I add new tasks, one per line. The braindump syntax allows for specifying tags, setting priorities, and other things as well.


Next, I prepare the task list for the upcoming week:


```
$ week.pl edit
Carry over the following tasks?
bring about world peace (y or n) [default y] y
write some awesome pre
do some other cool stuff
create practical cold fusion (y or n) [default y] n
return library book
```

The `edit` command iterates over each task tagged with ‘week’ that is not marked completed. It prompts whether or not I wish to carry the task over to this week (leave the tag). Any tasks marked completed have the tag removed automatically.

```
$ week.pl add
Created:
        #YRVK write an awesome report for Tom [week dev]
        #YRVL test some new stuff for deployment [week admin]
```

The `add` command creates new tasks that are already tagged with ‘week’ and prints each task it creates.

```
$ week.pl update https://path.to.trac/wiki/Tasks/2010-03-15
Sup dawg, I heard you like tasks so I did ur shit for you
```

The `update` command grabs my tasks tagged with ‘week’ and formats them one per line starting with ‘ * ’, a wiki bullet list. It grabs the current wiki page, finds my heading, substitutes the formatted task list under the heading, and submits the change. It also stores the given URL in the YAML config file.
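
The fragment it drops under my heading ends up looking roughly like this (the heading and tasks are illustrative):

```
== Michael ==
 * bring about world peace
 * write an awesome report for Tom
 * test some new stuff for deployment
```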

```
$ week.pl go
```

This opens the stored URL for this week’s tasks wiki page in my default browser, allowing me to confirm week.pl did what it’s supposed to.


Later in the week once I’ve done something:


```
$ week.pl done tom
#YH7T bring about world peace [dev week]
#YRVL test some new stuff for deployment [week admin]
[DONE] #YRVK write an awesome report for Tom [week dev]
```

This retrieves the task(s) tagged ‘week’ that are not marked completed and contain the given string. If there is only one match, the script marks it as done and then outputs the current state of tasks tagged ‘week’.

```
$ week.pl update
Sup dawg, I heard you like tasks so I did ur shit for you
```

Same as `update` before, except that when no URL is given, the URL is read from the configuration file. This way I only need to worry about the URL once per week, the first time I update for the week.

Conclusion


So there you have it. If you are also using Hiveminder, maybe aspects of my work flow will make sense for you and pieces of the Perl script may be useful. If you aren’t using Hiveminder, maybe you will be inspired to check it out. I use michael@thegrebs.com on Hiveminder in case you feel the need to assign me a task or wish to gift me another year of Hiveminder Pro 😉


week.pl
