Beware the Plan of Sauron

The “Master App” can’t magically make it all work.

On the Project VRM blog, Doc Searls recently suggested that the killer app for VRM is the "Master App". In response, on the Project VRM email list, Jim Pasquale suggested it's more of a mixing board than a master app. Jim's right.

The "master app" reminds me of what I call "The Sauron business model," a term I coined after watching over one hundred and twenty 60-second pitches at two different Startup Weekend Santa Barbara events in the last two years. With all of those pitches in rapid succession, the pattern popped right out.

For those of you who might not be Lord of the Rings fans, Sauron was the bad guy hell-bent on unifying everything in Middle Earth under his brutal rule, and wanted that hobbit’s ring to do it:

Three Rings for the Elven-kings under the sky,
Seven for the Dwarf-lords in their halls of stone,
Nine for Mortal Men doomed to die,
One for the Dark Lord on his dark throne
In the Land of Mordor where the Shadows lie.
One Ring to rule them all, One Ring to find them,
One Ring to bring them all and in the darkness bind them
In the Land of Mordor where the Shadows lie.

[quoted from http://en.wikipedia.org/wiki/One_Ring]

What I saw repeated again and again and again in those pitches at Startup Weekend were hopeful entrepreneurs who earnestly believed that if they could just unify all of a person's [insert unique idea here], they could provide a groundbreaking new service that would transform the world. Just like Sauron, all they needed was that One Thing to make it all work…

  • Creating a single, unified view of an individual's finances, MyFinances.com will make it easy for people to instantly waste less–and make more–money!
  • Bringing together all of a user’s vendors into one coherent dashboard, people will finally be able to make smart choices that both save money and create more value, while vendors get better access to the right customers at the right time!
  • Giving users control of their identity across all online services will make their online lives simpler and more secure.

Sound familiar?

The problem with the Sauron business model is that it depends on first unifying All the Things before it generates any unique value.

Unless you can provide value FIRST, you’ll never get a chance to unify all the things. Trying to convince or coerce users into doing so makes you look a little like Sauron: delusional, power hungry, and more value destroying than value-creating.

What Doc wants sounds great, but starting with a dependency on unification is the wrong framing of the opportunity.

As I see it, there are two ways forward for the ambitious market changer: sharpshooting your way into a revolution or teaching a gorilla to dance.

For most entrepreneurs, with limited ammunition and time, finding a way to make every shot count isn't just important, it's vital. Find a niche and nail it. That mantra isn't new; both Geoffrey Moore (Crossing the Chasm) and Ries & Trout (Positioning) built business strategy movements on the idea. Focus is everything to the early startup. Do that and you might just be able to become a unifying tool for end-users… you just won't start out as one.

On the other hand, if you're a player in a big company, with an already ubiquitous presence, then perhaps the opportunity is to make your over-sized gorilla dance like Fred Astaire. Bill Gates orchestrated the myth of Microsoft turning on a dime to take on the Internet. Steve Jobs created entire new categories of devices when he returned to Apple after a forced hiatus. Unfortunately, while most of us don't have Steve Jobs or Bill Gates levels of genius, even fewer of us are in a position to change existing players as they did. Those of us fighting for VRM are rooting for Sean Bohan over at Mozilla, who is fighting the good fight at the organization that makes Firefox, the world's third most popular web browser. If you are lucky enough to be in a position like Sean's, go for it. We need visionary change from the top in as many large companies and organizations as we can get. But there are far more hopeful entrepreneurs than change agents positioned at industry giants…

In short, beware of the Sauron plan. If you’re imagining your startup unifying all of anything before you produce unique value for your users and customers… you’re probably doing it wrong.


How to conditionally display variables with EJS

Short version: <%= user.name ? user.name : '' %>

When using EJS as a template language, it can be a bit of a mystery how to concisely display variables if and only if they are defined.

For example, if you have a form field that is pre-filled with data from the database, you usually don't want to pre-fill it with the string "undefined". Rather, you'd like those fields left blank. This is a common situation when using Node.js with MongoDB and Mongoose, which are perfectly happy to leave fields missing; the trouble is avoiding the "undefined" value when rendering your EJS template.
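
For concreteness, here's the kind of setup I mean. This is a hypothetical sketch, not code from any particular app; the route, schema, and template name are all made up:

// Hypothetical Express + Mongoose setup: a user saved without a name will have
// user.name === undefined when we render the template below.
var express = require('express');
var mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/example');

var User = mongoose.model('User', new mongoose.Schema({ name: String, email: String }));

var app = express();
app.set('view engine', 'ejs');

app.get('/profile/:id', function(req, res) {
  User.findById(req.params.id, function(err, user) {
    res.render('profile', { user: user }); // profile.ejs pre-fills a form from user
  });
});

app.listen(3000);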

Typical solutions have something awkward like

<% if(user.name) { %> <%= user.name %> <% } %>

That's got all those extra <% %> delimiters.

You might try something like

<%= if(user.name) { user.name }%>

But that doesn’t like the “if”, because <%= is expecting a value, not a statement. So you might try this:

<% if(user.name) { user.name }%>

But that just evaluates user.name rather than displaying it: the expression runs inside a scriptlet, its value is thrown away, and nothing actually gets output to the web page.

Fortunately, the ternary operator is magic. It can perform a conditional anywhere a value is needed.

Here’s the ternary for displaying EJS values only when defined:

<%= user.name ? user.name : '' %>

Yes, you still need the trailing empty string (''), but that's a small price to pay.

Now, whenever you want to drop in a field value if–and only if–it is defined, use this trick.
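
For instance, in a form it looks something like this (a hypothetical profile template, assuming a user object was passed to the render call):

<!-- profile.ejs: fields stay blank instead of showing "undefined" -->
<input type="text" name="name" value="<%= user.name ? user.name : '' %>">
<input type="text" name="email" value="<%= user.email ? user.email : '' %>">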

For those sticklers out there, user.name could be defined but null or an empty string. The “technical” way to check for undefined would give us something like

<%= typeof user.name!='undefined' ? user.name : '' %>

But functionally, both approaches give you what you want for the form field use case. So, I’ll stick with the simpler, shorter one.


Destroying contract law: CISPA violates more than privacy

Don’t let Congress undermine our best free market tool for fixing our relationships with companies.

The US House of Representatives just passed a bill (CISPA, a.k.a. H.R. 624) that explicitly allows companies to ignore their privacy agreements in the name of cybersecurity.

Here’s the Huffington Post report:

http://www.huffingtonpost.com/2013/04/18/cispa-vote-house-approves_n_3109504.html

SOPA. The Monsanto Protection Act. CISPA. Regulatory capture of the worst kind.

Please get the word out. Fight this thing.

If we can’t even depend on the blatantly one-sided Terms of Service and Privacy Policies of our service providers, entire fields of solutions evaporate.  Efforts to improve, fix, clarify, negotiate or automate the privacy and service agreements will be essentially worthless if Congress is willing to give corporations a free pass.

“Notwithstanding any other provision of law, a self-protected entity may, for cybersecurity purposes … share such cyber threat information with any other entity, including the Federal Government.”

Enshrining corporate protections like this in law isn’t just a privacy problem. It undermines the very notion of contract as a mechanism for constructing agreements in a free society.

This is unacceptable.

Fight CISPA. Call your senator. Call the White House. Blog it. Tweet it. Repost this.

Tell everyone.


Google sees the value of Free Customers

This is fascinating:
http://www.adweek.com/news/technology/google-bringing-trueview-ads-apps-games-147558

Google has an ad program on YouTube that lets users skip ads, and they are now extending it to other ad formats.

Even though it is the same old advertising game–something that could use some fixing–what’s impressive is that with the ad-skipping feature Google saw “a 40 percent reduction in the number of people who click away from a video when shown a pre-roll” ad.

It’s real-world proof that a free customer is more valuable than a captive one. Give people the freedom to leave and more will stay than if you had forced the issue.

I've done this myself. Initially, I was ready to leave the page because the content didn't seem worth the extra delay of the ad. But then I saw that if I waited just a few seconds, I could click past it. Not only did the ability to click past keep me from abandoning the video altogether, but in a few instances the opening bit was funny or intriguing or just interesting enough for me to want to see the rest of the ad.

It’s a brilliant example of how giving people the freedom to leave can actually keep them around more.


Badges for the Standard Label Kickstarter

We’ve been asked if we have any badges to help promote our Kickstarter for the Standard Information Sharing Label.

The answer is now yes!

If you’re a backer or just want to help promote the idea, put these babies on your website or blog or Twitter or Facebook and help get out the word!

We have less than two weeks left to rally support for a radical new way for companies to communicate about what they do with the information we share online.

Once we finish the Kickstarter, the link will go directly to our new website and will help promote the use of the Standard Label for websites everywhere.

Feel free to download the images or just use the URLs directly in an img src attribute. We've also provided example HTML that links directly to http://standardlabel.org.

<a href="http://standardlabel.org"><img src="https://blog.joeandrieu.com/wp-content/uploads/2012/05/standard-label-badge.1.png" alt="I Support The Standard Label!" title="I Support The Standard Label!"/></a>
<a href="http://standardlabel.org"><img src="https://blog.joeandrieu.com/wp-content/uploads/2012/05/standard-label-badge.2.png" alt="I Support The Standard Label!" title="I Support The Standard Label!"/></a>
<a href="http://standardlabel.org"><img src="https://blog.joeandrieu.com/wp-content/uploads/2012/05/standard-label-badge.3.png" alt="I Support The Standard Label!" title="I Support The Standard Label!"/></a>
<a href="http://standardlabel.org"><img src="https://blog.joeandrieu.com/wp-content/uploads/2012/05/standard-label-badge.4.png" alt="I Support The Standard Label!" title="I Support The Standard Label!"/></a>

Rethinking Context

Insights from PII2012

The FTC Privacy Report makes it clear that context is the key to privacy. For example, notice and consent need not be presented and secured if the use is obvious from context: If you buy a book from Amazon, it’s clear they need an address to ship you the book.

But sometimes the context isn't clear to the average user, even when it is obvious to developers. My Mom believes she doesn't share anything on Facebook because she mostly just comments on other people's posts. Ilana Westerman's work shows the same disconnect: many people just don't see their privacy exposure because they have simplified models of their actions. They think what they are doing isn't the risky stuff, but they rarely have a full picture of what they are really doing.

Making that harder are monolithic Terms of Service and Privacy Policies that bury the details arbitrarily far away from the point of exposure, and in confusing legalese.

The answer is some form of bite-size context management. For example, Smart Disclosure, which is the US administration’s language for greater clarity about risks of information sharing:

One of the most powerful versions of smart disclosure is when data on products or services (including pricing algorithms, quality, and features) is combined with personal data (like customer usage history, credit score, health, energy and education data) into “choice engines” (like search engines, interactive maps or mobile applications) that enable consumers to make better decisions in context, at the point of a buying or contractual decision

Or perhaps something along the lines of Personal Levels of Assurance, a term from AT&T describing piece-wise on-demand disclosure and consent.

This is also the approach behind the Standard Information Sharing Label, which lets you see, in simple, consistent terms, exactly what happens with the data you are about to share, before you share it. That instance of sharing defines the context for which the information may be used, and the label makes it easy for individuals to understand that context.

We aren't compressing the entire Terms of Service and Privacy Policy for a given site; we're presenting just the essential details about a particular instance of information sharing. Bite-size disclosure, right at the point of sharing, because nobody wants to read 47 pages of legalese.

We think that’s the right model for untangling the world wide web.


It all starts with sharing…

From kindergarten through our professional life, sharing binds us together as friends, colleagues, and collaborators, so perhaps it should be no surprise that online sharing through services like Facebook, Twitter, and email shapes our online social life. Yet sharing online is anything but simple.

The details of what happens with the information we share are often hidden behind long, complicated legal agreements that almost no one reads. If we're lucky, they are explained in Terms of Service and Privacy Policy documents, sometimes buried out of view, other times thrust on us like ransom notes, demanding that we state our compliance or leave the site.

It doesn’t have to be that way.

Today, at the Internet Identity Workshop, we officially launch the Standard Information Sharing Label, which makes it easy for websites to say in simple, consistent language what they do with our information, making it easier for individuals to make better decisions about the information we share online.

The Information Sharing Work Group has published a draft specification defining the Standard Label as well as a Kickstarter project to finance its graphic design.

The Kickstarter has a brief video explaining the effort. The official press release is here.

The work is free to use and open to collaborators.

In all my years contributing to the VRM conversation, few projects have made me as proud as I am of the work behind the Standard Label.

Check it out. If you like it, please spread the word and consider chipping in to help take this work to the next level.


The World’s Simplest AutoTweeter (in node.js)

Last month, I set up a quick little autotweeter using Node.js to help me with Repeal Day Santa Barbara. I wrote a short blurb about it beforehand; here's what actually shipped.

(Many thanks to the guys at the Santa Barbara Hacker Space for inspiring and contributing to this project.)

The plan was simple.

  1. Set up a free server at Amazon Web Services.
  2. Write a simple daemon that processes a queue of tweets, sending them to the RepealDaySB Twitter account at just the right time.
  3. Write a bunch of tweets and schedule them.
  4. Run that daemon on the evening in question (December 5).

AWS

Setting up with Amazon was pretty easy. I created an instance at the "free tier" level with port 22 as the only inbound port, using an Ubuntu AMI (ami-6006f309). Installing node.js and emacs was pretty easy, once I connected using VanDyke's SecureCRT, which handled the public key authentication like a charm. With that setup, it was pretty straightforward to start coding. (I did need some other ports to explore some of the examples that turned out to be dead ends, but for the live service, all I needed was SSH access on port 22.)

The First Tweet

The next step was to work through the details to get node.js to send just one tweet to Twitter. A lot of the examples out there offer more functionality than I needed, using frameworks like ExpressJS to support authenticating into Twitter as any user.  But I didn’t need that. In fact, I didn’t want an interactive service. I didn’t really need a database and I didn’t need a real-time interface. I just wanted to tweet as me (well, as my RepealDaySB persona).

Twitter has pretty good support for this single-user use case:  https://dev.twitter.com/docs/auth/oauth/single-user-with-examples  If only they’d had example code for node.js…

The good news is that node-OAuth is the go-to library for OAuth on node.js and after a bit of wrangling it did the trick:

So, the first thing I did was put my secret keys into twitterkeys.js

exports.token = '3XXXXXXXXX89-3CbAPSxXXXXXXXXXXy42A9ddvQkFs96XXXXXXX';
exports.secret = 'HHXXXXXesTKZ4bLllXXXXXXXXXX8zAaU';
exports.consumerKey = "XXXXXXXbgfJRXXXXXXXX";
exports.consumerSecret = "9XXXXXXXXXQJ9U8VuoNMXXXXXXXXX";

Then, I could import that file like this:

var keys = require('./twitterkeys');

And access it like this:

var tweeter = new OAuth(
  "https://api.twitter.com/oauth/request_token",
  "https://api.twitter.com/oauth/access_token",
  keys.consumerKey,
  keys.consumerSecret,
  "1.0",
  null,
  "HMAC-SHA1”
);

I did the same thing with my tweets in tweets.js, since I thought that might be useful:

module.exports =
[ {
  status:"test1",
  timestamp: "2011-11-5"
},{
  status:"test2",
  timestamp: "2011-11-7"
}];

And to access that,

var tweets = require('./tweets.js');

The astute observer will note my brilliant plan to use a timestamp for scheduling. We’ll return to that later.

To figure out what to do with my shiny new OAuth object I looked up Twitter’s API:

POST statuses/update  Updates the authenticating user's status, also known as tweeting. To upload an image to accompany the tweet, use POST statuses/update_with_media. For each update attempt, the update text is compared with the authenticating user's recent tweets. Any attempt that would result in duplication will be…

Easy. The endpoint is documented at

https://dev.twitter.com/docs/api/1/post/statuses/update

And here’s the code that actually combines all that into my very first tweet from node.js.

var https = require('https');

var OAuth= require('oauth').OAuth;
var keys = require('./twitterkeys');
var twitterer = new OAuth(
		   "https://api.twitter.com/oauth/request_token",
		   "https://api.twitter.com/oauth/access_token",
		   keys.consumerKey,
		   keys.consumerSecret,
		   "1.0",
		   null,
		   "HMAC-SHA1"
		  );

var tweets = require('./tweets.js');

var status = tweets[0].status;

var body = ({'status': status});

  // url, oauth_token, oauth_token_secret, post_body, post_content_type, callback

twitterer.post("http://api.twitter.com/1/statuses/update.json",
	       keys.token, keys.secret, body, "application/json",
	       function (error, data, response2) {
		   if(error){
		       console.log('Error: Something is wrong.\n'+JSON.stringify(error)+'\n');
		       for (i in response2) {
			       out = i + ' : ';
			       try {
				   out+=response2[i];
			       } catch(err) {}
			       out += '\n';
			       console.log(out);
			   }
		   }else{
		       console.log('Twitter status updated.\n');
		       console.log(response2+'\n');
		   }
	       });

Data Store

At first I thought I’d use a database. There are plenty that are easily accessible from node.js and I even signed up for a free service that hosted CouchDB. CouchDB is attractive for node.js work because you can basically store JSON objects directly in the database. But that also got me thinking…

Maybe a database is overkill: too much capability for what I really needed. I don't need to update the tweets during the evening. I don't need to support simultaneous access. I don't need speed or scalability. That's when I realized I was thinking like a client-side developer–the world I do most of my javascript coding in. With node.js on the server, I could just read and write to a local file! Turns out it's easy. I should have thought of that earlier, given that I was already reading my tweets with require(), but I hadn't thought about being able to WRITE to the file to keep track of what had been tweeted.

Here’s how you do it. First, set up the path, using the __dirname variable to access the local directory:

var path = __dirname+"/tweets.js";

Then, to read the file:

var fs = require('fs');
fs.readFile(path, "utf8", function(err, data) {
  if (err) throw err;
  tweets = JSON.parse(data);
});

And to write the file:

fs.writeFile(path, JSON.stringify(tweets, null, 4), function(err) {
  if (err) throw err;
  console.log("It's saved");
});

Date & Time

Now, about that timestamp. I had to represent when I wanted each tweet to go out, and having been here before, I knew it could be tricky to make sure the server agrees on the timezone. Javascript has a great Date() object which can parse ISO-8601 formatted dates (e.g., 2011-12-05T10:00-08:00), so I tried using that. Since the timezone is explicit in ISO-8601, it doesn't matter what timezone the server is in, as long as the comparison uses a fully qualified timestamp. It took a bit of trial and error because the parser is pretty strict, but eventually, I got it. However, that raw timestamp isn't that easy to work with, so I used an old trick that I ported from Excel into Google Docs: put the data in a spreadsheet for editing, and use columns to format it into the right JSON. Then you can cut & paste the rows into a text editor, delete all the tabs, and get the format you need. Here's the doc I actually used.

It worked like magic. I got to use spreadsheet math to track the number of characters remaining in the tweet plus to schedule the dates. Things like

=B9+25/24/60

set a time that's 25 minutes after cell B9 (spreadsheet times are stored in days, so 25/24/60 of a day is 25 minutes), which made scheduling our tweets a breeze. With a bit of wrangling, I was able to get the easy-to-edit date and tweet on the left translated into the proper JSON & ISO-8601 format on the right.

After deleting the tabs, here’s what a resulting line looks like:

{"status":"Good Morning, Santa Barbara!  Happy Repeal Day, everybody! 78 Years ago, we lifted the chains of Prohibition, ratifying the 21st Amendment!","timestamp":"2011-12-05 T 10:00 -08:00"},

Add a bracket or two and clip the extra comma, and you’ve got a nice JSON array suitable for our javascript code.
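
If you want to double-check that a timestamp like that compares correctly no matter what timezone the server is in, here's a quick sketch (using the compact form from earlier; the variable names are just for illustration):

var when = new Date("2011-12-05T10:00-08:00"); // offset is explicit in the string
var now = new Date();                          // whatever timezone the server happens to be in
console.log(when.getTime());                   // milliseconds since the epoch, timezone-independent
console.log(when < now);                       // so comparisons like this just work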

Google Docs was especially nice because it made collaborating on the tweets super easy. My business partner and I had a great way to review and revise the set of tweets as we got ready for the main event.

Timing

The next trick was figuring out how to run the code so that it hummed politely along and sent the tweets when necessary. Since this was the only important process on my machine, I could've run a never-ending loop constantly checking the time, but that seemed inelegant, and after all, the point was to learn how to use node.js properly. What I wanted was to start the daemon and forget about it, letting it sleep and wake up just when it needs to send a tweet.

So, every once in a while, the daemon wakes up, builds a list of tweets that need to be tweeted (because the current time is after their timestamp), tweets them, and marks them as tweeted. Also, we keep track of the earliest timestamp in the rest of the tweets, so we can sleep the right amount of time.

Here’s how:

var now = new Date();
var next_time;
var tweet_now = new Array();
for(t in tweets) {
   if(tweets[t].tweeted)
     continue;
   time = new Date(tweets[t].timestamp);
   if(time < now ) {
     tweet_now.push(tweets[t].status);
     tweets[t].tweeted = true;
   } else if (!next_time || // either this is the first pass
       time < next_time) { // or this is a sooner timestamp than recorded
     next_time = time;
     console.log("setting next time = "+next_time);
   }
}

And then, just a bit of housekeeping. We tweet, save to file, and reset the timer. The nice thing about saving to file is that if we have to kill the daemon, then when we load the file at the start, we'll know which tweets have already been sent.

if(tweet_now.length) {
   tweet(tweet_now);
}
save_tweets();
if(next_time) {
  delay = next_time.getTime()-now.getTime();
  setTimeout(run_tweets,delay);
   console.log("Delay: "+delay);
} else {
   console.log("Done.");
}

And that’s pretty much it.  It’s quick and dirty, so I just dump the errors to console, which is great for debugging, but it may not be the best strategy. More on that later.

Here’s the complete file that we actually used the night of December 5, Repeal Day.

var fs = require('fs');
var OAuth= require('oauth').OAuth;
var keys = require('./twitterkeys');
var path = __dirname+"/tweets.js";
var tweets;
var auto_tweet = function() {
  console.log("auto_tweet");
  fs.readFile(path,"utf8", function(err,data) {
    if (err) throw err;
    tweets = JSON.parse(data);
    // tweets are only loaded once. If you change the file, restart
         run_tweets();
  });
};
var run_tweets = function() {
  console.log("run_tweets");
  //find all the tweets that happen before "now"
  // saving the soonest timestamp that is before "now"
  //mark them as "tweeted"
  //tweet them
  //save to file
  //reschedule
  var now = new Date();
  var next_time;
  // console.log("first next_time = " + next_time.toUTCString());
  var tweet_now = new Array();
  for(t in tweets) {
    if(tweets[t].tweeted)
      continue;
    time = new Date(tweets[t].timestamp);
    if(time < now ) {
      tweet_now.push(tweets[t].status);
      tweets[t].tweeted = true;
    } else if (!next_time || // either this is the first pass
      time < next_time) { // or this is a sooner timestamp than recorded
      next_time = time;
      console.log("setting next time = "+next_time);
    }
  }
  if(tweet_now.length) {
    tweet(tweet_now);
  }
  save_tweets();
  if(next_time) {
    delay = next_time.getTime()-now.getTime();
    setTimeout(run_tweets,delay);
    console.log("Delay: "+delay);
  } else {
    console.log("Done.");
  }
};
var save_tweets = function() {
  fs.writeFile(path,JSON.stringify(tweets,null,4),function(err) {
    if(err) throw err;
    console.log("It's saved");
  });
};
var tweet = function(tweets) {
   var tweeter = new OAuth(
    "https://api.twitter.com/oauth/request_token",
    "https://api.twitter.com/oauth/access_token",
    keys.consumerKey,
    keys.consumerSecret,
    "1.0",
    null,
    "HMAC-SHA1"
  );
  var body;
  for(t in tweets) {
    console.log("tweeting : "+tweets[t]);
    body = ({'status': tweets[t]});
    tweeter.post("http://api.twitter.com/1/statuses/update.json",
      keys.token, keys.secret, body, "application/json",
      function (error, data, response2) {
        if(error){
          console.log('Error: Something is wrong.\n'+JSON.stringify(error)+'\n');
        } else {
          console.log('Twitter status updated.\n');
        }
      });
    }
  }
// Now start it up
auto_tweet();

Success

It worked, mostly. And it shipped on time. That was awesome. I made it as simple as possible. If I could have, I would’ve made it simpler. But that isn’t to say there weren’t problems.

Challenges

1. Too many libraries, too much functionality.

It took a long time to wade through the blog posts and tutorials on how to use node.js, OAuth, and Twitter. It's great that there are so many approaches, many well documented. But I didn't need all that; I wanted short and simple. Maybe you're looking for that too. If so, I hope this post was easy to find.

2. Operations support

As I was out on the Repeal Day pub crawl, all I had was my Android phone to keep on top of things. Surprisingly, I was able to get an SSH client working, even using the public key authentication.  Unfortunately, the text was TINY.  And I couldn’t type the “|” character, making it impossible to use some of my favorite commands.  Apparently, that’s a well-known problem with my particular phone.  Also, the batteries got sucked dry REAL fast.  I had to resort to keeping the phone off most of the time. Even then, it died well before the tweets ran their course.

3. Twitter won’t send your own tweets to your phone

This was particularly annoying. Since my app was sending my tweets and Twitter wouldn't echo them to my phone, I had to keep checking, either manually or by asking my partner whether the tweet went out.

Unresolved Issues

1. Process management was non-existent

So, not having dealt with server processes for a few years, I hadn't fully thought through the fact that closing the session would kill my daemon. Next time I'll try using Forever.

2. Server may have been unstable

In hindsight, I think the server crashed on me and I have no idea why. I should've piped the errors into a log file, which presumably could be done with a proper process-handling approach. Killing and rerunning the daemon restored the service, but there was something fishy going on that I never got to the bottom of.
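
For what it's worth, here's a minimal sketch of the kind of logging I had in mind. This is an after-the-fact idea, not what ran that night, and the file name and helper are made up:

var fs = require('fs');
var logPath = __dirname + "/autotweeter.log"; // hypothetical log file next to the script
var log = function(msg) {
  // append a timestamped line instead of (or in addition to) console.log
  fs.appendFile(logPath, new Date().toISOString() + " " + msg + "\n", function(err) {
    if (err) console.log("couldn't write to log: " + err);
  });
};

Swapping the console.log calls for something like that would at least have left a trail to read after a crash.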

3. Wacky characters didn’t paste well from Docs to JSON

Undoubtedly this was a Unicode encoding issue, and it showed up in words with tildes or accents, of which there were quite a few given the exotic ingredients in some of the cocktails for Repeal Day. The best way around this would be to find a way to read from Google Docs directly. The second-best would be to build some sort of interface instead of using Google Docs. Alternatively, I could debug the copy & paste process and see if I could isolate where the problem happened. Maybe it was in my SSH terminal, pasting into emacs, which might suggest that copying & pasting into a real editor locally and sending the file to the server would avoid the problem.

And Done

That's it. Perhaps it was a bit of overkill. There are plenty of free auto-tweeting services out there. But in addition to my doubts about how well they might work, I also thought it was a small enough use case for learning node.js. In that, it was a huge success.

Let me know if this was useful for you. I’d love to hear from fellow coders if this helped you along in any way.


Kindle files to my iPad (Gutenberg eBooks)

How do you get a book at Project Gutenberg into your iPad Kindle app?

It’s easy.

Go to Gutenberg on your iPad and download the “Kindle” .mobi file.  It’ll automatically open in the Kindle app. Yay.

Don't worry about the usual instructions for getting a .mobi file onto your iPad's Kindle app; that route is a bit of a chore. As long as you have a URL for the .mobi file, just use the Safari browser to download it directly. Once you open it, it will stay in your Kindle bookshelf until you delete it.

Works like a charm.


Playing in the Treehouse


For the last few months, I've been helping a friend find a good way to learn HTML. She's an experienced professional designer… in fact her website designs were winning awards as far back as 1994. But she finally realized that because she never learned the brick-and-mortar work underlying the web, she was hampered in building breakthrough designs. As the web moves more and more to interactive, real-time mash-ups, that is becoming even more true.

Unfortunately, the vast majority of options we considered were either way too expensive (like returning to Art Center to take a traditional college-level course), too focused on tools (learning yet another WYSIWYG editor is not what she’s looking for), or too scattershot (covering everything in so much detail it’s not clear where to start).

Enter Treehouse. They just got a bunch of funding, which is why they crossed my RSS stream, and I immediately liked the idea: a flat monthly fee for unlimited access to a very focused, very straightforward set of online classes about web design and development. You start with the basics and work your way up to web guru. Plus, they have an iOS track that might be interesting later. If they were to add Android, that'd be cool. And can we imagine a future with server-side topics like Perl and PHP, or perhaps apps like WordPress and Node.js? I sure can.

To brush up on my fundamentals, I decided to try it out before recommending it to my friend…

It definitely lives up to the hype.

It is simple and straightforward, with just the right amount of quizzes and coding challenges. There are badges to earn and a registry for folks looking to hire students of the class. The incentives and engagement are nicely packaged in a smooth, easy framework. I’m an old hand at most of this stuff—I have already sent in a few bits of feedback correcting a few technical errors on the quizzes—but I am going to work through the series just to reconnect to the fundamentals.  I expect it to be fast, easy, and nicely relevant to some of my current projects.

So, if you’re completely new to HTML or want a nice foundation layer for your professional work, I couldn’t recommend Treehouse more.  I don’t often give such sterling endorsements, but I like what they’ve done.

Kudos to Ryan Carson and Team. Well done.
