Archive for the ‘Developer’ Category

Spam and security

March 5, 2013

With a series of high profile security breaches in recent weeks (Twitter, Evernote, LinkedIn and others) the obvious concern is that the attacker has access to your account. In some cases it’s more than that.

A friendly bear to help with VPN tunnels

August 24, 2011

If you want to keep your internet traffic secure when using public WiFi, or have a desperate need to pretend you’re in a different country to access an online service, you’ve probably tried a Virtual Private Network (VPN) service like StrongVPN. As powerful as most of these services are, they’re not exactly user friendly, and for a casual user they can work out quite expensive.


TunnelBear hopes to change that with an easy to install, easy to configure and, above all, easy to use app. It also starts at a pretty great price – Free!

Currently available for Windows and OSX (hopefully Linux and iOS will follow), it’s a simple install that delivers both the simple dashboard app and the network drivers needed for VPN support. From there it’s a case of firing up the dashboard, deciding if you want to appear as a UK or US user, and hitting the “on” button to switch your network connection over to the VPN. You can change locations or de-activate the VPN just by tapping a button.

Free users get a monthly allowance of 500MB, which should be enough for simple casual needs (and they run promotions where you can bump that allowance up). If you need a bit more – in fact, unlimited bandwidth and double the level of encryption on your connection – then they have a “Giant” plan for US$4.99/mo – less than the price of a coffee at the Starbucks where you’d want to be running this.

Looking forward to seeing this for Linux so I can add it to my bootable USB Key solution.

GUIDs in JavaScript

July 14, 2011

Update: From the comments below it looks like I arrived at the same solution someone else had come up with earlier. I recommend you check out the Broofa.com code as they have done more work on making it performant and robust.

----

A while ago I needed a quick and simple way to generate a GUID in a JavaScript project, but most of the examples I could find were either slow, cumbersome or didn’t always generate GUIDs that would pass verification, so I had an attempt at writing my own that had to be performant, small and robust enough to use in a real world environment at scale.


Well, after generating 50 million GUIDs across all the mainstream browsers (and some pretty obscure ones!) in my other logging system (an internal project, not jsErrLog – though it’s used there as well) I’m happy that it’s behaving well enough to share, so with no further ado…


function guid() { // http://www.ietf.org/rfc/rfc4122.txt section 4.4
    return 'aaaaaaaa-aaaa-4aaa-baaa-aaaaaaaaaaaa'.replace(/[ab]/g, function(ch) {
        // 'a' becomes any hex digit; 'b' becomes 8, 9, A or B (the RFC 4122 variant bits)
        var digit = Math.random()*16|0, newch = ch == 'a' ? digit : (digit&0x3|0x8);
        return newch.toString(16);
    }).toUpperCase();
}


Regular expressions, nested functions and logical operators… probably the most I’ve ever crammed into that few characters, though if you’re really obsessive you can crunch it down even further to one line at the cost of readability:


guid=function(){return"aaaaaaaa-aaaa-4aaa-baaa-aaaaaaaaaaaa".replace(/[ab]/g,function(ch){var a=Math.random()*16|0;return(ch=="a"?a:a&3|8).toString(16)}).toUpperCase()};
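
If you want to convince yourself the output is well formed, here’s a quick check (not part of the original function, just an illustration) that generates a pile of GUIDs and validates them against the v4 pattern:

function testGuids() {
    // what guid() should always produce: version 4, variant 8/9/A/B, uppercase hex
    var re = /^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$/;
    for (var i = 0; i < 100000; i++) {
        var g = guid();
        if (!re.test(g)) { throw new Error("Invalid GUID: " + g); }
    }
    console.log("all GUIDs passed the format check");
}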

Let Frebber make your FREB files easier to handle

June 16, 2011

If you have used IIS for any length of time you have probably come across the term FREB. If you don’t know what it is then you should read this great introduction to Failed Request Tracing in IIS. It’s applicable to IIS7 and above and is a great tool.

At a high level FREB produces an XML file containing details of errors you are interested in – you specify the error code you want to trap, the execution time threshold or a number of other filters – and provides a wealth of information about what was happening under the covers in IIS.

The problem with FREB tracing, though, is that it’s very easy to end up with a folder containing hundreds or even thousands of error reports – all named some variant of fr000123.xml – with no way to quickly tell which were the ones with details of 401.3 errors, or which ones failed because they took more than 5 seconds to execute.

Well, thanks to the wonders of PowerShell there’s now a simple solution.

Frebber scans the output directory where your FREB logs are stored and copies the files into a new subdirectory (called .Frebber of course) while at the same time renaming the files based on the nature of the error report they contain.

For instance, fr000012.xml may contain details of an HTTP 415 error on a request that took 2571ms to execute, so the file would be renamed 415_STATUS_CODE_2571_fr000012.xml

It’s a fairly simple script, and if you have a look at the XML format inside a FREB report you’ll be able to see how to adapt it quickly to your particular needs. Meanwhile feel free to use the example below – I’d love to hear any suggestions in the comments.

Oh, it does make one pretty big assumption… that your FREB files are going to the default directory. If that’s not the case then you will need to modify that line (I might get around to making the script more complete and adding parameters for source and destination directories and some renaming selection criteria, but right now this works pretty well for me).

$frebDir = "c:inetpublogsFailedReqLogFilesW3SVC1"
echo "Frebbering...."
$fileEntries = Get-ChildItem $frebdir*.* -include *.xml;
$outDir = $frebDir + ".Frebber"
# Create the directory for the Frebberized files
$temp = New-Item $outDir -type directory -force
# copy in the freb.xsl so you can still view them
Copy-Item ($frebDir+"freb.xsl") $outDir
$numFrebbered = 0
foreach($fileName in $fileEntries) 
{
    [System.Xml.XmlDocument] $xd = new-object System.Xml.XmlDocument
    $frebFile = $frebDir + $fileName.name;
    $xd.load($frebFile)
    $nodelist = $xd.selectnodes("/failedRequest")
    foreach ($testCaseNode in $nodelist) 
    {
        $url = $testCaseNode.getAttribute("url")
        $statusCode = $testCaseNode.getAttribute("statusCode")
        $failureReason = $testCaseNode.getAttribute("failureReason")
        $timeTaken =  $testCaseNode.getAttribute("timeTaken")
        $outFile = $frebDir + ".Frebber" + $statusCode + "_" + $failureReason + "_" + $timeTaken + "_" + $fileName.name;
        Copy-Item $frebFile $outFile
        $numFrebbered +=1
    }
}         
echo "Frebbered $numFrebbered files to $outdir."

jsErrLog: now alerts via XMPP

June 13, 2011

Although it’s nice to know that the jsErrLog service is sitting there recording errors that your users are seeing, it does put the onus on developers to remember to check the report page for their URL to see if there have been any issues.

To make things a little more pro-active, registered users can now connect with an XMPP (Google Chat) client (e.g. Digsby) and every time there’s a new error reported the bot will send you an alert.

Because you might get a flurry of messages if you deploy a version with an error, or a 3rd party component has a problem, the bot also listens for a set of messages so it’s easy to suspend the alerting (or turn it back on when the problem has been fixed).

At the moment there are a few restrictions:

·         alerts have to match a specific URL

·         for a given user all alerts are turned off/on (no per URL granularity)

·         alerting is only available to users who’ve made a donation or promoted jsErrLog

The reason for the first one is a limitation in the way AppEngine lets me query data: unlike SQL, the GQL query language does not support the CONTAINS or LIKE operators, so an alert can only be matched against an exact URL… I’m looking for a solution to that.

The second is a feature that I plan to add soon depending on demand.

The third… at the moment it takes a little bit of setup to add new users, so I’m adding it as the first freemium feature, though this may change. If you want it enabled please let me know the URL you are monitoring and your Google Chat ID and I’ll let you know what else you need to do to enable it…

jsErrLog – now with XML

June 9, 2011

To help analyze data from jsErrLog – my JavaScript error log service – I added a new feature today: an XML data feed for reports.

You can access a report as normal and view it in the browser (eg the sample report) and there you will now see a direct link to the XML version of the report.

If you know the URL you want to report against then you simply access it via http://jserrlog.appspot.com/report.xml?sn=http://blog.offbeatmammal.com where the parameter after sn= is the URL you want to query.
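
That also makes the feed easy to consume from script. Here’s a rough sketch (the element names inside the feed aren’t documented here, so this just reports whatever the root element contains, and the usual cross-domain XHR restrictions apply):

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://jserrlog.appspot.com/report.xml?sn=http://blog.offbeatmammal.com", true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // enumerate the feed - adapt once you've seen the real element names
        var root = xhr.responseXML.documentElement;
        console.log(root.nodeName + " has " + root.childNodes.length + " child nodes");
    }
};
xhr.send();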


Both the report and the XML show up to the last 500 results for the URL. I plan to add limits to the XML feed, and pagination to the HTML report, in a future release (let me know in the comments what’s more important, and any other requests). I would like to implement a full OData feed for the data but haven’t found a good Python / App Engine sample out there yet…

One great thing about having the data available as an XML source is that you can add it as a Data Source in Excel and from there filter and sort to your heart’s content.


Building a safe and portable way to get online

May 19, 2011

Over the last few months I’ve had a couple of friends go through some rather unfortunate domestic situations which have involved partners spying on their computer activities, intercepting and even sending emails from what they thought was a private account. The spying has taken a variety of forms, ranging from simply accessing a machine that’s not been locked through to using a keylogger or network sniffer to steal passwords and read email.

There are weaknesses with any operating system, especially if you do not have sole access to the machine or a way to secure the local area network to avoid eavesdroppers, so to try and solve the problem I looked at ways to eliminate the risks of both physical access and software spying.

The solution I came up with is a little technical, but works pretty well and provides a good balance of security and ease of use.


The first part of the solution is an unobtrusive USB Flash Drive. These can take many forms but for convenience I’ve been using LaCie USB Keys that look like keys. They come in various sizes (though I consider 8GB the minimum for what I’m doing), they’re easy to hide in plain sight, and you’re not likely to misplace one if it’s with your house or car keys.

The second part of the solution is a stand-alone installation of Ubuntu. While it’s not as user friendly or as familiar as Windows or OSX for a lot of people, it’s fairly simple to set up a totally self-contained installation that runs entirely from the USB Key – it leaves no trace on the host machine, it never starts the host machine’s operating system (so software key-loggers and other spyware are useless) and it’s fairly light-weight so you can start up or shut down in less than 30 seconds.

Setting Ubuntu up in this way doesn’t follow the usual path of building a LiveCD that most people use to try out Linux – with that style of setup you can’t store data on the drive or perform in-place upgrades (patching the build, adding new drivers or even migrating to a new version).

The final part of the solution is installing anti-virus scanners that you can use to examine the host machine, and a VPN client to secure your communications with the outside world…

Preparing the Bootable Ubuntu key

These instructions do assume you have a clue what you’re doing, and that you can deal with the consequences of doing something wrong along the way. If you follow the recommendations you should be okay but, as with anything of this nature, there may be dragons ahead…

Safely selecting the right drive.

You may omit this step if, after partitioning, you choose to install GRUB to the root of the USB drive you are installing Ubuntu to (ie sdb, not sdb1). Get this wrong, though, and you can overwrite the HDD’s MBR, which can be a pain to deal with, so skipping it is not recommended. If you don’t know what GRUB is… proceed with caution!

·         Turn off and unplug the computer.

·         Remove the side from the case.

·         Unplug the power cable from the hard drive.

·         Plug the computer back in.

Installing Ubuntu

·         Insert the flash drive.

·         Insert the Live CD.

·         Start the computer; the CD should boot.

·         Select language

·         Select “Install Ubuntu”.

·         Select “Download updates while installing” and select “Install third-party software”.
If you have an active network connection (wired recommended) this will save time later on.

·         Forward

·         At “Allocate drive space” select “Specify partitions manually (advanced)”.

·         Forward

·         Confirm Device is correct.

·         Click “free space” and then “Add”.

·         Select “Primary”, “New partition size …” = 4 to 6 GB, Beginning, Ext4, and Mount point = “/” then OK.

Optionally configure a Home partition

If you’re only planning to have a single user and primarily store data in desktop folders then this isn’t required.

·         Click “free space” and then “Add”.

·         Select “Primary”, “New partition size …” = 4 to 8 GB, Beginning, Ext2, and Mount point = “/home” then OK.

Optionally configure swap space

This allows hibernation, but from experience with this configuration it’s quicker and easier to shut down and start up than to hibernate.

·         Click “free space” and then “Add”.

·         Select “Primary”, “New partition size …” = remaining space, (1 to 2 GB, same size as RAM), Beginning and “Use as” = “swap area” then OK.

Finish installation

·         Confirm “Device for boot loader installation” points to the USB drive. Default should be ok if HDD was unplugged.

·         Click “Install Now”.

·         Select your location.

·         Forward.

·         Select Keyboard layout.

·         Forward.

·         Insert your name, username, password, computer name and select if you want to log in automatically or require a password.

·         Select “Encrypt my home folder” for added security (especially if there is a risk of losing the drive).

·         Select forward.

·         Wait until install is complete.

·         Turn off computer and reconnect the HDD.

·         Reboot the computer and select the flash drive to start.

·         Log in and complete the installation, upgrading packages and adding options like the Chrome browser or the Evolution email client.

Securing your connection

While a stand-alone machine image allows you to keep local content secure, you also want to make sure no one is sniffing communications on wired or wireless networks. At the very least you need to ensure people are not stealing passwords, so in Chrome you want to install something like the KB SSL Enforcer extension, which will try to redirect any connection to a secure channel to make snooping a lot harder.

If you want to ensure none of your online communications are overheard then you want to install and configure a Virtual Private Network (VPN) connection with someone like StrongVPN – this has the added advantage for some that you can even choose which country you want to appear to be surfing from 🙂

There are a number of Linux based anti-virus solutions (such as ClamAV) that can be used to scan the host machine, but if you want to clean a Windows machine I’d recommend getting a bootable version of Spybot S&D (which you can also run from a Flash Drive and keep up-to-date) as that’s a more robust solution.

Email and Documents

Depending on your situation you may want to keep as much as possible on the USB Key and as little as possible on the web, vice versa, or somewhere in between. Personally I recommend setting up a new webmail (Hotmail or Gmail) account only once you are securely connected (so the password is never visible on an unsecured connection) and using Evolution to keep it in sync with the local drive. That way you can work either from the disk in off-line mode, or log in from a web browser in an internet café or somewhere away from prying eyes. For documents, a service like Ubuntu One (probably a good bet as it’s integrated with the OS), DropBox or SkyDrive gives you the flexibility of working locally or “in the cloud”.

Given the risks of losing the drive, or of corruption caused by removing it too early, I would strongly recommend keeping important data backed up somewhere secure and online just in case. You might also want to consider installing Prey on the image in case you lose it.

Stay safe out there!

A lot of what you need to do to stay safe is common sense – don’t share logins, don’t re-use passwords and things like that – but sometimes you need to bring more sophisticated tools and techniques to bear… I’d love to see some comments about how to improve this solution or make it simpler. If you like the idea of having this sort of setup but the instructions have put you off, I’m happy to build a key for you for a reasonable fee (to cover time and expenses). Support for Ubuntu or any other applications mentioned here should come from the respective suppliers.

Azure Dynamic Compression

April 9, 2011

On a normal Windows IIS installation it’s pretty easy to turn on dynamic compression for WCF and other served content to reduce the amount of bandwidth you consume (important when you are charged by the byte) – you just change the server properties to enable dynamic as well as the more common static compression.

With Windows Azure though it’s a little more interesting, because with roles dynamically assigned and started from a standard instance you don’t have much control… unless you’re used to doing everything from the command line…

Luckily one of the nice things that you can do with an Azure role is script actions to take place as part of the initialization. The process is as simple as adding the commands you need to execute to a batch script that gets deployed as part of your project and calling it at the relevant time.

The first thing your script needs to do is turn dynamic compression on for the server in that role:

·         "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config -section:urlCompression /doDynamicCompression:true /commit:apphost

You then want to set the minimum size for files to be compressed (in bytes):

·         "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config -section:system.webServer/httpCompression -minFileSizeForComp:50 /commit:apphost

Finally your script should specify the MIME types that you want to enable compression for:

·         "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/xml',enabled='true'] /commit:apphost

·         "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/atom+xml',enabled='true'] /commit:apphost

·         "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/json',enabled='true'] /commit:apphost

If you have a problem with MIME types like atom+xml not registering properly you may need to escape the plus sign and replace the string with 'atom%u002bxml' – I’ve had success with both methods.

You can add as many MIME types as you need to the list, and remember that sometimes you also need to specify the character set you are using:

·         "%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/xml;charset=utf-8',enabled='true'] /commit:apphost

And then when you’re done, exit the script to tidy up gracefully:

·         exit /b 0
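
Pulled together – nothing new here, just the commands above in sequence – the script looks like this:

REM EnableDynamicCompression.cmd - enable and configure IIS dynamic compression
"%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config -section:urlCompression /doDynamicCompression:true /commit:apphost
"%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config -section:system.webServer/httpCompression -minFileSizeForComp:50 /commit:apphost
"%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/xml',enabled='true'] /commit:apphost
"%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/atom+xml',enabled='true'] /commit:apphost
"%SystemDrive%\Windows\System32\inetsrv\appcmd.exe" set config /section:httpCompression /+dynamicTypes.[mimeType='application/json',enabled='true'] /commit:apphost
exit /b 0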

Once you have combined those steps together in a script and saved it as (eg) EnableDynamicCompression.cmd, you should add the script to your Visual Studio project and make sure you select “Copy Always” in the properties for the file to ensure it gets correctly deployed.

Finally you need to add a reference to that startup script in your project’s ServiceDefinition.csdef file and then deploy your project as normal.

    <Startup>
        <Task commandLine=”EnableDynamicCompression.cmd” executionContext=”elevated” taskType=”simple”></Task>
    </Startup>

Finally… how do you know if it’s working or not? The thing that tricks people a lot of the time, and makes them think it’s broken, is that a corporate proxy server will often un-compress the data on the way past. You can check for yourself using a tool like Fiddler to examine the response and make sure it has been gzipped, or you can visit http://www.whatsmyip.org/http_compression/ and test that way (the latter is good if you are behind a proxy which interferes with the compression).
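
If you’d rather check from a script than a browser, a minimal Node.js sketch like this (the host and path are placeholders – substitute your own service) asks for gzip and reports what actually came back:

// request a URL with gzip allowed and report whether the response was compressed
var http = require("http");
http.get({
    host: "example.cloudapp.net",      // placeholder - your Azure host
    path: "/service.svc/data",         // placeholder - your endpoint
    headers: { "Accept-Encoding": "gzip" }
}, function (res) {
    console.log("Content-Encoding: " + (res.headers["content-encoding"] || "none"));
});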

jQuery image animations

March 28, 2011

Working on a personal project over the weekend I needed a better way to present a central image on a site. The image was the major draw card for the site and we wanted to place links and other content on and around it.

As we wanted to showcase multiple images the easiest solution was to animate the image replacement with jQuery, but the problem with that was that the links and floating content really needed to move depending on the underlying image.

A combination of jQuery, CSS and old-fashioned JavaScript produced a fairly simple solution where it’s easy for us to swap the images for new creative content and, via JavaScript, manipulate where the captions need to move to.

http://blog.offbeatmammal.com/samples/play/slider.html
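
The core of the technique boils down to something like this sketch (the element IDs, image names and caption coordinates are made up for illustration – the real page drives them from the creative content):

// fade between images and slide the caption to a per-image position
// (#caption needs position:absolute in the CSS for animate() to move it)
var images = ["one.jpg", "two.jpg"];                           // illustrative file names
var captionPos = [{top: 20, left: 40}, {top: 120, left: 200}]; // per-image caption spots
var current = 0;
function nextSlide() {
    current = (current + 1) % images.length;
    $("#hero").fadeOut(400, function () {
        $(this).attr("src", images[current]).fadeIn(400);      // swap the image mid-fade
    });
    $("#caption").animate(captionPos[current], 800);           // move the overlay to match
}
setInterval(nextSlide, 5000);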

Given a bit more time I’ll tweak the scripts to pick up the starting location from the CSS rather than hard-coding it in two places, and optimize the code and CSS a bit more, but as a proof of concept it was pretty effective.

After playing with the jsErrLog JavaScript error reporting code (a mixture of JavaScript and Python for AppEngine) it was nice to do something more front-end oriented.

Update: New version of the JavaScript debugger

March 18, 2011

Although jsErrLog, my “Web Watson”, has only been out a couple of days I had a great suggestion to let developers add some additional context to errors that are being trapped.

To support this I’ve added a new property, jsErrLog.info – a string variable you can update at any time (after the library has been registered on your page, of course) – and the first 512 characters simply get passed through to the report database.

To use this new feature simply add a line that sets the variable to the data you want passed through; if the error handler gets called, the data will be transferred to the database:

jsErrLog.info = "Populated the Info Message to pass to logger";
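
For example (an illustrative sketch – the startup steps are placeholders, only jsErrLog.info itself comes from the library) you might update it as the page moves through its startup stages so a trapped error tells you where things were:

jsErrLog.info = "startup: loading config";
// ... load your configuration here ...
jsErrLog.info = "startup: binding UI events";
// ... wire up your event handlers here ...
// any error trapped from here on will include the last value set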

Any other features you think are worth adding?