Playing with Yeoman and Grunt

I’m a very old-fashioned developer and I’ve been staying away from grunt, gulp, yeoman and other JavaScript tools. As a sysadmin, the entire Node.JS ecosystem, frankly, terrifies me. When the recommended install method for your language is to download a shell script from the internet and run it via curl, well, it isn’t going to sit well with me.

I know I can extract what needs to happen from that script and run it myself, that’s what I actually do. However, that isn’t really part of the documentation. This leads to a larger debate of languages that move fast and how to handle package management tools that move fast as well. I don’t have a solution and I don’t want to get into that argument in any case.

This week, I had some free time and played around with Yeoman and Grunt. The goal was to see what the developer workflow would look like. I’ve been wanting to write better tooling for my Jekyll blog and never got around to it. Right now I use Fabric for all my automation and that’s about it. I’m looking to modernize it with Bower and to combine and minify the CSS and JS. But first, I wanted to try the tools out.

Merely out of curiosity, I built my seizure tracker. It’s just a web app (currently using client-side JS) that shows the days and hours since my last seizure. It can definitely be better, but the point wasn’t to build it, it was to test the workflow.

Installing things in Node.js is painful. A lot of apps want to be installed globally. I refuse to install them with sudo, so I end up setting a custom prefix that installs them to a folder in my home directory. To top it off, npm install gives me very little insight into what it’s actually doing while consuming a sizeable amount of CPU and RAM.
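For reference, the prefix trick looks like this (the directory name is my choice, adjust to taste): point npm’s global prefix at a folder you own and put its bin/ on your PATH, so npm install -g never needs sudo.

```shell
# Install "global" packages under ~/.npm-global instead of a system path
npm config set prefix "$HOME/.npm-global"

# Make the installed binaries reachable (add this to ~/.bashrc or similar)
export PATH="$HOME/.npm-global/bin:$PATH"
```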

I like Bower. It makes it easy to install and update dependencies and to generate bower.json based on what I’ve installed. I tested a few countdown libraries and ended up not using one in the end, because it was sidetracking me from my goal of playing with the tools. Since I used generator-webapp, it set up a few Grunt tasks for me. grunt serve is gorgeous. I love how it refreshes the page in the browser when I save a file. I finally understand what all the hipsters have been raving about! The internet tells me, though, that I should try Gulp instead.
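For the record, the generator-webapp flow I went through looks roughly like this (package names as of this writing; the project name and the jquery install are just illustrative):

```shell
# One-time setup: Yeoman, Bower, and the webapp generator
npm install -g yo bower generator-webapp

# Scaffold a new project; this also wires up the Grunt tasks
mkdir seizure-tracker && cd seizure-tracker
yo webapp

# Pull in a front-end dependency and record it in bower.json
bower install --save jquery

# Serve locally, with live reload on every save
grunt serve
```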

From a quick look through the ecosystem, I like what I see. In the next few days I will probably work with Gulp and Bower to update the tooling around my blog so I can serve it even faster than I do right now!

Better Problem Definition

I’m a core developer on CKAN at Open Knowledge, the most widely used data catalog software. Early this year, we released version 2.2 of CKAN with a complete overhaul of the filestore. Amusingly, right after that, we started getting more and more complaints on the ckan-dev list about data loss from the old filestore. One of the many folks on the list helped narrow it down to a particular file called persisted_state.json.

This file is created by a library called ofs. Every time a new file is added to the filestore, OFS does the following:

  • Read the persisted_state.json file.
  • Convert the JSON to a Python dict.
  • Add an element to this dict with the metadata of the new file.
  • Convert the dict back to JSON.
  • Write this new JSON to persisted_state.json file.
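The steps above can be sketched as the classic read-modify-write cycle (a minimal sketch, not the real ofs code; the file layout and metadata fields are assumptions). It also shows why concurrent writers clobber each other: two processes can both read the same state, and whichever writes second silently discards the other’s entry.

```python
import json
import os


def add_file_metadata(store_dir, label, metadata):
    """Record a new file's metadata, the non-thread-safe way."""
    state_path = os.path.join(store_dir, "persisted_state.json")

    # 1-2. Read persisted_state.json and parse the JSON into a dict.
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)
    else:
        state = {}

    # 3. Add an element with the new file's metadata.
    state[label] = metadata

    # 4-5. Convert the dict back to JSON and write the whole file.
    # A second process that read the old state in the meantime will
    # overwrite this entry when it writes its own copy back.
    with open(state_path, "w") as f:
        json.dump(state, f)
    return state
```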

This caused concurrency problems when files were added to the filestore at high frequency, and eventually led to data loss. Oh joy.

Technically, this wasn’t a bug in CKAN’s codebase. We had already solved the core problem at this point by switching to a new filestore that did not use ofs. We couldn’t abandon our users, though, and I volunteered to find a fix. I read through the ofs code and thought about solving the problem there. After an hour or two of reading up on concurrency and the relevant Python documentation, I still didn’t have a working solution. Eventually, I asked myself what I was actually looking to solve.

My original problem: “OFS is not thread-safe, causing data loss.” I then realized that’s not what I wanted to solve. A better problem to solve was: “OFS is not thread-safe, causing data loss. Our users need their data.” So, I wrote a script that would regenerate the persisted_state.json file with just enough metadata to start working. It isn’t a complete fix, but it was a productive fix. The script was “dramatically” called ofs-hero.
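The idea behind the script can be sketched like this (a hypothetical sketch, not the actual ofs-hero code; the directory layout and the metadata recorded are assumptions): walk the filestore and rebuild persisted_state.json from the files actually on disk.

```python
import json
import os


def regenerate_state(store_dir):
    """Rebuild persisted_state.json from the files on disk.

    The metadata recorded here (just the file size) is illustrative;
    the real script records whatever ofs expects to find.
    """
    state = {}
    for root, _dirs, files in os.walk(store_dir):
        for name in files:
            if name == "persisted_state.json":
                continue  # don't index the state file itself
            path = os.path.join(root, name)
            label = os.path.relpath(path, store_dir)
            state[label] = {"size": os.path.getsize(path)}

    # Write the regenerated state in one go
    with open(os.path.join(store_dir, "persisted_state.json"), "w") as f:
        json.dump(state, f, indent=2)
    return state
```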

Lesson Learnt: Defining the problem properly helps you solve it better.

Fossmeet 2014

On January 24, I got an email from the Speakers Team at FOSSMeet asking me if I would be able to propose a talk. Fast forward to February 13th, and I was on a bus heading to Kozhikode. It had been quite a while since I’d spoken at a conference, and this would be the first time I’d talk about open data. Despite being a Malayali, I hadn’t actually met many members of the FOSS community from Kerala. As usual, I enjoyed meeting and talking to people about their work and what they do (yes, I’m still an introvert :D).

On the first day, I attended Praveen’s talk (fine, not a talk, a discussion) about privacy. Rather fierce arguments broke out about privacy, specifically whether the government should invade it to save lives. I’m fairly certain it got a lot of people thinking about privacy. It’s hard to think about things like privacy unless you can contextualize them for yourself, and that’s exactly what happened.

After lunch, I sat in on Anoop’s workshop about contributing to open source. It was meant to give people an idea of the tools you should know. I only sat in for an hour or so, and they were learning git at the time. A while later, I stepped out, primarily because I was starting to get sleepy.

I got back to the main auditorium just in time to learn that a student had passed away on campus, in the grounds we could see from the auditorium. A wall had fallen down and he was trapped under it. All of the attendees were asked to stay in one of the lecture halls while the organizers talked to the faculty and figured out what to do next. The organizers decided to cancel all the entertainment activities that were planned, as well as the hack night. The remaining sessions were held as informal discussions rather than actual talks. Later that night, the next day’s events were also canceled.

My talk had a few people, and we had a good conversation about open data; thanks to Nirbheek, we also had people glance at The Ballot. I couldn’t give the talk I had planned, but I’m grateful for the discussions we had. Later that day, the students led a protest in front of the director’s house, and the rest of the event was formally canceled.

FOSSMeet seems to be a wonderful place to make more students aware of free and open source software and to kickstart contributions. The organizers had done a good job, but were just unlucky with the turn of events. Now that I’ve attended FOSSMeet once, I’m planning on attending the next editions for sure.

Quick Tip: Ansible Debugging

Today I learned something about Ansible debugging from benno on #ansible. Occasionally, commands can get stuck, especially when they’re waiting for input. You can’t fix this until you recognize what’s going on and see the prompt. In other words, you want to see the stdout and stderr on the target machine. Here’s what you do:

  • Run ansible with -vvv.
  • Login to the remote host where the command is being executed.
  • Find the ansible process executing the command and kill it.
  • The stdout and stderr should be printed to the console where ansible was running.
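Roughly, in commands (the playbook name and the pgrep pattern are illustrative; match whatever the stuck module’s command line looks like on your host):

```shell
# On the control machine: run with maximum verbosity
ansible-playbook -vvv site.yml

# On the target host (in a separate terminal): find the process
# ansible spawned for the stuck command
PID=$(pgrep -f ansible | head -n 1)

# Killing it makes ansible print the stdout/stderr it captured,
# which usually reveals the prompt the command was waiting on
kill "$PID"
```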

UDS-M Day 5

Phew, I’m finally getting down to writing the day 5 overview, a few days after UDS. Generally, I write the previous day’s blog post the next day. After day 5, though, I had to get to work (yeah, on a Saturday). On Friday, I decided to tackle my power trouble by going outside for the hours I knew in advance I wouldn’t have power. Overall, a good idea, but they decided to cut power at different times. Sigh.

First thing in the morning was a call with Daniel Holbach to discuss the Cleansweep Project. Skype kinda gave us trouble, and we ended up using Facebook chat to discuss stuff.

Community Roundtable

A morning round-up of all the community work, including what we have coming up. My memory is faint about what we discussed, but I vaguely remember everyone summing up the week and the progress that was made. Also, someone was playing music from Benjamin’s laptop, which included the Titanic song. Fun times 😉

Ubuntu Women Session

A session I didn’t want to miss. This session was more goal-oriented than all the other sessions. I liked the mentorship discussion and the revival of the whole thing. I’ll probably sign up to be a mentor. I’ve already helped a few friends I know through UW in other teams like Bug Squad. The idea was not to replace the other mentorship options but to work with them, and to put up a list of folks on the UW wiki who can be contacted about particular things.

I decided to take a break from the next session to plan for Operation Cleansweep, a project I have volunteered to coordinate. I put up wiki pages and came to the realization that we needed more time to get things together. I’d rather have a proper start, with the documentation and everything ready, even if it means waiting. I pinged Daniel and we decided to postpone the start date to May 24th, 2010.

Lightning Talks

As usual, James Tantum rocked us with pictures of the slides, since most of the talks used slides. I’ve forgotten a lot of them, but the ones that rocked included one by Jonathan from the Launchpad team about ‘How to be an evil overlord’ or something to that effect, Popey’s Momubuntu talk, James Westby’s talk about launchpadlib (and yes, try try try until you succeed), a talk from the Google Chrome guys about how speed matters, Chris Johnston on Classbot, Alan Bell on Etherpad (we overloaded the pad 😉), and more that I’ve forgotten. I’ll wait for the videos.

Travis Hartwell talked about how he wanted a way to pull the source for all the dependencies of a package with one command instead of typing out many different commands. I was pretty sure sed or awk could do something, coupled with apt-cache. My sed-fu is pretty weak, so I asked my good friend Mackenzie Morgan, who wrote something up for this. Travis, this one’s for you, buddy:

apt-get source $(apt-cache depends gwibber | awk '/Depends/ { print $2 }')

That command gets you the source of all of Gwibber’s dependencies. Change the package name to do the same for any other package. The source is downloaded into the current folder of the terminal you run it from. Perhaps someone could make the whole thing prettier, but hey, this is a start 🙂 Thanks again, maco!

Advocate the use of daily builds

This is one of the projects that Daniel Holbach has been assigned for this cycle. It’s been given high importance, and I realize why. A daily build means that every time you write new code, it gets built for you, and a whole lot of folks can test it and give you bug reports. Various improvements to LP were discussed, including a rollback option, among others.

Ubuntu News Team

Amber is the chief editor of the Ubuntu Weekly Newsletter, so I attended this one hoping it would be interesting, and it was! There was a lot of discussion about unifying teams, etc. There was a thought of doing away with the Fridge, which I stopped right away. Reminding you folks again: we WANT the Fridge! Well, it wasn’t a serious consideration, just a thought someone had. All in all, they made some tough calls, which will happen internally. Also, the Fridge is moving to WordPress soon, which should make a lot of things easier. I don’t remember who, I think Joey, will be working with the Design Team on a new theme, etc., for the Fridge.

Closing Session

Finally, UDS came to a close. Everyone had great fun for a week and did lots of work. Most people were tired and close to burnout (yeah, from all the staying up late in the bar or out partying 😉). Seriously, it was tiring. Even from remote, I was burned out. The last two days I’ve been so tired. Hopefully I can recharge this week.

All the track leads summed up their tracks. Important stuff included Robbie confirming that 10.10.10 could be a release date, pending TB approval. He talked about how much time each cycle has had, and it seemed okay. The Jaunty cycle only had 25 weeks, so for 10.10.10 we’ll have only 23 weeks, and it seems possible. Scott talked about btrfs and how it may be the default option for Maverick, the keyword there being ‘may’; Scott blogged about what needs to happen for that. Leann summed up the kernel track decisions. I didn’t understand much of it, so I’m skipping that. The Design, Desktop, and Cloud tracks also had small summaries, which I don’t particularly recall. This is why I should write blog posts then and there. Oh yeah, now I remember one decision from desktop: Chromium will be the default browser for the netbook edition.

Finally, Jono summed up the community track. A huge list of summing up, most of which I think I’ve already written about in the previous posts. He announced Project Cleansweep. Well, he announced it as Project Babu and how it was renamed to Project Cleansweep. I wonder why I even bothered to oppose it if he was going to call it Project Cleansweep a.k.a. Project Babu 😀

The final quote from Jono: ‘Let’s get seriously drunk, people.’ He did say he was kidding, but the tone in which he said it was awesome. Marianna arranged a treasure hunt, and she was given a small token of appreciation from the community for all the hard work she did over the week. Finally, UDS is over!

Now, time to get to work.