Thursday, October 11, 2012

PHP from the command line using Clip CLI framework

Just wanted to post about my latest invention. It's called Clip and it's a very simple framework for making PHP CLI applications. I recently got a new job and have been working a lot with several PHP console commands. The problem I found was that there wasn't a good standalone framework I could use to make creating and editing these commands easy.

Sure there are a lot of great frameworks with some kind of console component to them (Symfony, Yii, Zend, etc.) but I wanted something where the console was the sole focus. So partially for fun and partially to scratch my perceived itch I came up with Clip.

Clip, IMO, turned out pretty well. It's a single file that comes in at ~130 LOC (as of right now) and packs a few really nice features. First and foremost, it's dead simple to use. All you need to do is create a PHP file in the commands directory. In that file add a class that shares the same name as the file and implements the Clip Command interface. Here is an example using a file named "Example.php":

<?php

class Example implements \Clip\Command
{
    public function run(array $params)
    {
        /* Do command stuff */
    }

    public function help()
    {
        echo 'This is some example help text';
    }
}

That's it. Now you have a PHP command that can be run via the console. To execute that command all you need to do is open a terminal and type:

$ clip Example

Say you need to pass your command some additional parameters. Those can be passed by just specifying them at the end. They are passed to the run method as the $params array. If the parameters are specified as key=value then $params will be an associative array. For example:

$ clip Example test foo=bar
$params == array('test', 'foo' => 'bar');
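The parsing behind that is simple; here is a minimal sketch of how arguments might be turned into the `$params` array (this is my own illustration, not Clip's actual source):

```php
<?php
// Turn raw CLI arguments into Clip-style $params:
// plain values stay positional, key=value pairs become associative.
function parse_params(array $args)
{
    $params = array();
    foreach ($args as $arg) {
        if (strpos($arg, '=') !== false) {
            // Split on the first '=' only, so values may contain '='.
            list($key, $value) = explode('=', $arg, 2);
            $params[$key] = $value;
        } else {
            $params[] = $arg;
        }
    }
    return $params;
}
```

So `parse_params(array('test', 'foo=bar'))` produces `array(0 => 'test', 'foo' => 'bar')`, matching the example above.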

Finally one of the best parts about Clip is its powerful configuration class. You can easily create and work with config files. To create a new configuration file all you do is add a new PHP file to the "config" directory. In that file you will return an associative array of parameters. For example you could make a config file called "myconfig.php" which looked like this:

<?php

return array(
    'param1' => 'foo',
    'param2' => 'bar'
);

Now that the config file is set up, you access it in your code by calling the config class like so:

\Clip\Config::myconfig('param1');
Which will return the value 'foo' from the config file. If you want to fetch more than one parameter you can specify them and the config class will return an array of the values like so:

\Clip\Config::myconfig('param1', 'param2');

If you need to fetch an entire config file as an array, call the config class without any parameters:

\Clip\Config::myconfig();
Take note that the static function being called exactly matches the name of the config file we created (minus the .php file extension). That lets you create as many config files as you would like.
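That name-to-file mapping can be pulled off with PHP's `__callStatic` magic method; here's a rough sketch of how such a config class could work (my guess at the mechanism, not Clip's actual source):

```php
<?php
class Config
{
    // Invoked for any undefined static call, e.g. Config::myconfig('param1').
    // The method name selects the config file, the arguments select the keys.
    public static function __callStatic($name, $args)
    {
        // Config files return an associative array when included.
        $values = include 'config/' . $name . '.php';

        if (count($args) === 0) return $values;           // whole file
        if (count($args) === 1) return $values[$args[0]]; // single value

        $result = array();                                // subset of values
        foreach ($args as $key) {
            $result[] = $values[$key];
        }
        return $result;
    }
}
```

With a `config/myconfig.php` like the one above, `Config::myconfig('param1')` returns 'foo', and calling it with no arguments returns the whole array.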

That is pretty much all there is to Clip; it's very simple and easy to use. Check it out by following the link to the repo below.

Here is the Repo

Tuesday, July 24, 2012

Ported the CI YouTube library to plain old PHP

I finally got around to porting my CodeIgniter YouTube API Library to regular PHP. There honestly aren't that many CI dependencies so it was a fairly easy library to port.

I changed how the constructor is set up so it's a little more descriptive when being instantiated. All of the API calls remained the same, so you can check out the CI library for example calls to make. I included some example code on how to use the API. The example files are index.php and authorize.php; they demonstrate how to authenticate with YouTube and then make calls. Let me know if you have any issues or questions.

The GitHub repo is:

Thursday, March 15, 2012

Don't Trust the Web of Trust (WOT)

Web of Trust is a website dedicated to informing its users about bad websites that could potentially cause problems, whether that be spammers, potential malware, or any other danger that lurks on the web. Just one problem though: it doesn't actually work. Well, I shouldn't say that; it works, kind of in the same way the A-Bomb works.

Allow me to provide some back story.

I have been working on a site for doing code reviews. I purchased the domain through my web host in January of 2011. I held the domain for about a year and finally started working on the actual application at the beginning of 2012. I have done a few minor releases, and today I came to discover, from a helpful redditor, that my site has a terrible reputation with Web of Trust. That was a bummer, but considering I had never heard of it before today I figured it would be fairly easy to fix or ignore. Then several people made similar comments, so clearly this WOT reputation holds some merit.

It seems that around 2009 the domain was accused of spamming in some variety and blacklisted by a "third-party trusted source". All of that is probably true; I didn't own the domain at that time and I have no way of knowing who did, since it was available when I purchased it. Anyway, I own the domain now, so I created an account on WOT and set up my site to prove that I owned it. If you check out the reputation page for my domain, the "third-party trusted source" link doesn't even work!? So not only does your reputation not change even after 3 years, but the "third-party trusted sources" that WOT uses don't exist anymore. Yet this tool that supposedly millions of people are blissfully using is passing judgments on domains without any kind of current references.

The fact that WOT is using 3 year old sources as gospel is bad enough, but I can't get any of the past bad reviewers to come back and alter their reviews. It was 3 years ago; they have probably moved on to other things. I did file for a new review and got one person to check it out and leave good feedback, but the WOT algorithm is too dumb to weigh one current review more heavily than several past ones, so my reputation on WOT is still bad.

If the algorithm doesn't work properly for improving reputation, I would be very surprised if it worked properly for hurting reputation either. Say, hypothetically, Facebook were to go bankrupt in a few years (a stretch, but stranger things have happened), and a malicious site were put in its place. WOT would give it an amazing rating, and no matter how many times it was marked as bad it would probably keep that amazing rating, because all the prior entries marking it as good are weighted just as heavily as any current entries marking it as bad.
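WOT's actual algorithm isn't public, but a flat, equally weighted average illustrates the failure mode I'm describing:

```php
<?php
// Toy illustration of why equally weighted ratings go stale:
// 100 old "good" (5-star) ratings drown out 10 new "bad" (1-star) ratings.
function flat_average(array $ratings)
{
    return array_sum($ratings) / count($ratings);
}

$ratings = array_merge(
    array_fill(0, 100, 5), // years-old positive ratings
    array_fill(0, 10, 1)   // fresh negative ratings
);

// (100*5 + 10*1) / 110 = 510/110, roughly 4.64: still looks "great"
$score = flat_average($ratings);
```

Unless recency is factored into the weighting, the site keeps a near-perfect score no matter what it becomes.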

Bottom line: if you are using WOT tools you are a fool. Not only are you letting yourself be frightened away from potentially good sites, but you are buying into the mindset that nothing on the web changes.

I think the most annoying thing about this whole situation is that there is no one to contact directly to have issues like the nonexistent "third-party trusted source" fixed. I just wonder how many other great sites have poor reputations because WOT isn't smart enough to realize that domains can change in an instant.

If you want to help me out you can give my site a positive rating on WOT. But an even better thing to do would be to stop using their tools and tell anyone who does use them what a joke they actually are.

Monday, February 20, 2012

Keep your CI instances up to date with this handy lib

It has been a while since I last posted on my blog, and for that I apologize. I have officially become a freelance developer (which makes that donate button on the right a bit more important). It has given me time to work on some personal ventures, which I will announce here in the coming months. Bottom line: I can finally write the code I want to write, and hopefully I can continue to do so for a while.

Given my free time, I have started work on Feed Forge again (which you should check out if you haven't yet), and I decided that I wanted to provide the ability to automatically update Feed Forge without having to bother with all the git commands. The solution was to use GitHub as a remote update server and write a CodeIgniter library to perform the updates.

The library keeps a record of the current commit that the files are on; when the update method is called it contacts GitHub to see if there is any change between the commits. If there isn't, it does nothing. If there is, it will pull down and extract a zip file of the new repo from GitHub and compare the two commits, removing or replacing any local files that were removed, added, or modified in the latest GitHub commit. The really cool thing about the library is that it doesn't require a Git instance on the local server, meaning it should run on just about any *nix based server running PHP.
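As a sketch of the core decision step, here's roughly how that commit comparison could be acted on, using the file list shape returned by GitHub's compare API (GET /repos/:user/:repo/compare/:base...:head); this is my own illustration, not the library's actual code:

```php
<?php
// Given the "files" list from GitHub's compare API response, decide what
// to do with each local file: 'removed' files get deleted locally, and
// anything added or modified is copied over from the extracted zip.
function plan_update(array $changed_files, array $ignored)
{
    $actions = array();
    foreach ($changed_files as $file) {
        // Never touch files the config says to leave alone.
        if (in_array($file['filename'], $ignored)) continue;

        $actions[$file['filename']] =
            $file['status'] === 'removed' ? 'delete' : 'copy';
    }
    return $actions;
}
```

The real library then only has to walk the resulting plan, which keeps the risky filesystem work in one small place.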

Let's get down to brass tacks here.

The library only has two methods. The first indicates whether there is an update available; it returns true if there is and false if there isn't. The second actually performs an update if one is available; it returns true on success or false on failure.

And that is all there is to the library. However, it also has a companion configuration file. The config file is vitally important to how the library works and needs to be set up properly. I will go over the different options.

github_user - [Required] This is the user name on GitHub of the account that the repo resides in.
github_repo - [Required] This is the name of the repository as it is on GitHub.
github_branch - [Required] This is the branch that you wish to perform updates from.
current_commit - [Required] This is the current commit the local files are set to. Note that you should only need to set this initially. It will be altered by the library as updates are performed.
ignored_files - This is a list of files that you do not want updated. Usually this would be config files that might be different on other servers, but you can specify anything in here.
clean_update_files - This is a flag to indicate if the library should clean up the zip file that it downloads and extracts from GitHub. This can usually be set to true.

Here is an example of the config file I was using to test with:
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

/*
 * The user name of the GitHub user who owns the repo
 */
$config['github_user'] = 'jimdoescode';

/*
 * The repo on GitHub we will be updating from
 */
$config['github_repo'] = 'Test-Auto-Updater';

/*
 * The branch to update from
 */
$config['github_branch'] = 'master';

/*
 * The current commit the files are on.
 * NOTE: You should only need to set this initially; it will be
 * automatically set by the library after subsequent updates.
 */
$config['current_commit'] = 'd2605907262c932035ec16bdd2716bcd163622bb';

/*
 * A list of files to never perform an update on
 */
$config['ignored_files'] = array('application/config/config.php');

/*
 * Flag to indicate if the downloaded and extracted update files
 * should be removed
 */
$config['clean_update_files'] = true;

The library is actually pretty simple and should work well in most circumstances, but I do need to stress that you should always make a backup prior to any upgrade. If used improperly, this library is capable of really messing up your CodeIgniter instance, so make sure you back up before you update. Also, this library does not do any kind of database updates; if you want those, you need to do them yourself.

Here is the link to the github repo.

Well enjoy and if you like this library please consider donating and as always if you have issues let me know in the comments.

[EDIT] I just want to point out that updates can only be applied to files in the webroot. Also the library only works with relative paths so if the GitHub repo references a file in application/controllers then the library will act on the file in application/controllers of the local webroot. This means that your GitHub repo has to be a complete CodeIgniter instance.

[EDIT 2] I updated the library to allow more general ignore statements. Meaning that you can now specify entire directories to be ignored or certain file names that should be ignored. You can even specify file extensions to ignore.
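For illustration, ignore matching along those lines could look something like this (my own sketch; the library's actual matching logic may differ):

```php
<?php
// Return true if $path matches an ignore entry: an exact path,
// a directory prefix (entries ending in '/'), a bare file name,
// or a file extension (entries starting with '.').
function is_ignored($path, array $ignored)
{
    foreach ($ignored as $rule) {
        if ($rule === $path) return true;                            // exact file
        if (substr($rule, -1) === '/'
            && substr($path, 0, strlen($rule)) === $rule) return true; // directory
        if (basename($path) === $rule) return true;                  // file name
        if (substr($rule, 0, 1) === '.'
            && substr($path, -strlen($rule)) === $rule) return true; // extension
    }
    return false;
}
```

So `array('application/config/', '.sqlite')` would skip everything under application/config as well as any SQLite database files, regardless of where they live.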

[EDIT 3] Currently the library only works with public GitHub repos. There is no authentication needed for those. If there is demand I can add support for authentication and private repos. You would also have to provide a private repo for me to test with though.

Friday, January 27, 2012

Feed Forge in the real world

I have quit my job and started freelancing. This meant that I needed a personal site to showcase the work I have done and what I am capable of. Unfortunately my first choice of domain has been taken for some time, so I had to settle for an alternative. My original thinking was that this would just be a site with flat HTML5 pages, but after some consideration I decided I wanted to throw the CMS I have been developing at it to shake out any issues.

Being an untested CMS, it obviously had lots of issues, mainly centered around getting the installer to bootstrap itself. Eventually I was able to clean things up and get everything working properly. Now I might be a little biased, but Feed Forge is a pretty decent little CMS. Every problem I have come across I feel was solved in very simple and elegant ways.

Don't get me wrong no one else should be using it on a production site just yet, but hopefully if/when you do, you will see how sweet it really is.

If you are interested, the code can be found in my GitHub repo.

Also, if you want to see it in action, it is running live on my site.

Play around with it and let me know if you come across any issues or would like to see a new feature.

Saturday, January 7, 2012

Dropbox API Library changes

Just wanted to let you guys know about some small changes that I made to the Dropbox library. I received an email from someone using the library. He wasn't really having any issues, but there was a minor annoyance: having to define the sandbox root for each method call.

He made a very good point, so I added a class constant at the top of the library called "DEFAULT_ROOT". All methods will use that value as the default root, but you still have the flexibility of specifying the root on each method call should you need it.
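The pattern is simple; a sketch of what it might look like (the method name and path building here are illustrative, not the library's real API):

```php
<?php
class Dropbox
{
    // Default root used when a method call doesn't specify one.
    const DEFAULT_ROOT = 'sandbox';

    // Each API method accepts an optional $root override, falling back
    // to the class constant when the caller omits it.
    public function metadata($path, $root = self::DEFAULT_ROOT)
    {
        // Illustrative: build the API path the request would use.
        return '/' . $root . '/' . ltrim($path, '/');
    }
}
```

So a plain `$dropbox->metadata('notes.txt')` hits the sandbox root, while `$dropbox->metadata('notes.txt', 'dropbox')` overrides it for that one call.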

Check out the updated library at the github repo.

I have been neglecting this blog lately. My new years resolution is to be more vocal about what I am working on and hopefully help you guys out.

Wednesday, November 16, 2011

Trials and tribulations with HTML5 Canvas on Android

You may not be aware, but I recently released a mobile web app. It is a neat little application that lets you post notes about someone's driving abilities based on their license plate number. It leverages several technologies: jQuery, jQuery Mobile, jqScribble (my custom canvas drawing tool), as well as some other things I will get into shortly.

The coolest part of my app, in my opinion, is the jqScribble drawing tool. It is touch enabled, so it will let you draw with either a mouse or your finger. It then provides hooks to let you specify how you want the image saved. In my case I send the image data string to my web service, which creates the image file. To do this the tool leverages the toDataURL function available on the HTML5 canvas element. Basically, toDataURL encodes the canvas image as a base64 data URL string. This canvas feature is amazing and super fast, which I love.

Nearing the final days before my official "launch", one of my co-workers wanted to use the app. It just so happens he has an Android phone. I believe his OS version is 2.2 (not sure which dessert that is). Anyway, he wanted to use the drawing tool, which I was expecting to work perfectly. The drawing part worked fine, but the saving part failed. After some research I discovered that up until version 2.3 of Android, the toDataURL functionality of canvas was nothing more than an empty function with a comment to the tune of "I don't know what this is supposed to do".

After some obvious ranting and Google cursing, I came across this post by Hans Schmucker. He was having the exact same issue as me and developed his own PNG encoder, which he used to replace toDataURL on Android devices. So I went to work attempting to implement what he did with my code; unfortunately I was never able to get it to work. However, he did get me thinking down the right track. This led me to the realization that toDataURL can be set to return either PNG or JPG formatted data. Now the only trick was finding a JPEG encoder written in JavaScript. Well, that wasn't tough; a good one exists here (thanks to Arizona for finding a new link after the old one was removed). All that was left was to add some simple detection and run the encoder if the stock toDataURL doesn't work properly.

Here is my code:

var tdu = HTMLCanvasElement.prototype.toDataURL;
HTMLCanvasElement.prototype.toDataURL = function(type)
{
    var res = tdu.apply(this, arguments);
    //If toDataURL fails then we improvise
    if (res.substr(0, 6) == "data:,")
    {
        var encoder = new JPEGEncoder();
        return encoder.encode(this.getContext("2d").getImageData(0, 0, this.width, this.height), 90);
    }
    else return res;
};
The first line puts the original toDataURL function into a variable so that, if it works correctly, we don't blow it away on the next line where we override toDataURL with our own custom function. The custom function attempts to call the original toDataURL, then checks the returned value. If it doesn't have the correct header information, we know that the original toDataURL failed, so we create a new JPEGEncoder instance, encode the canvas data, and return it.

Is this the best way of doing things? Probably not; it is never a good idea to muck around with stock functionality, but I didn't want to go back and change my code that already used toDataURL, so I made a judgment call.

Finally, to break it down for you: if you need to support the toDataURL function of canvas on an Android device < 2.3, you should take a similar approach.

1) Download a JPEG Encoder.
2) Add in your own custom toDataURL handler which will use the Encoder if the original toDataURL fails.

The drawing tool works like a charm, and saving images is still reasonably fast. Although I am dealing with fairly small images.