Wednesday, 10 April 2013

Building a new squeezebox ui.

I've been a user of the Logitech Squeezebox system for years, and I'm a bit gutted that they've decided to kill it as a product and replace it with Ultimate Ears.

It's a great little system that connects multiple hardware audio players to an open source backend server, written largely in Perl, which manages the library and enables a bunch of plug-ins (e.g. BBC iPlayer, RSS readers for the hardware displays, etc.).
It's got a great feature which synchronises multiple players (less good on Wi-Fi, but great on Ethernet).
There's a software version of the client which could easily turn a Raspberry Pi with a decent audio card into a headless player. Given that Logitech aren't going to make any more, I can make them cheaper!

But there's one thing I absolutely despise about it: its web front end. I spend a HUGE amount of time in front of this thing, and while five years ago I thought it was just fine, these days it just looks dated.

Yuck


I have an itch. That itch needs scratching. So I will build my own UI that I actually like.

Bit more technically, here's a list of gripes.

Ajax is so not 2013

The server itself has a fairly well documented (if imperfect) telnet interface. The web application polls every 5 seconds for server status, and every request from the client uses a new HTTP connection. Now, ok, this is running on my local network, so the overhead of so many request/responses is minimal, but the latency for updates sucks. I want the app to use WebSockets, proxying the telnet interface straight to the browser.
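For flavour, here's roughly what the push model looks like from the browser's side. This is a minimal sketch assuming a socket.io connection that relays raw CLI lines on a 'cli' event; the event name and the parseCliLine/subscribe helpers are my own illustrations, not part of the real server.

```javascript
// The CLI returns lines of space-separated, URI-encoded tokens; many of
// them are key:value pairs. Decode a line into a plain object.
function parseCliLine(line) {
  var fields = {};
  line.split(" ").forEach(function (token) {
    var decoded = decodeURIComponent(token);
    var i = decoded.indexOf(":");
    if (i > 0) fields[decoded.slice(0, i)] = decoded.slice(i + 1);
  });
  return fields;
}

// Instead of polling every 5 seconds, subscribe once and react to pushes.
function subscribe(socket, onStatus) {
  socket.on("cli", function (line) {
    onStatus(parseCliLine(line));
  });
}
```

With this in place the browser hears about a track change the moment the server announces it, rather than up to five seconds later.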

My server is tiny, my MacBook Pro is awesome

The box that my Squeezebox server runs on is pathetically small. It stays on 24x7 and is headless (no monitor etc.). My primary thinking here is power consumption and noise. I want to downgrade it from an old netbook to a Raspberry Pi in due course (FYI, I run CentOS on the netbook; just the server, no desktop or X installed).
Every time the existing web app searches the database, the server makes about 5 or 6 queries to its SQLite database. I have a large music library, around 50,000 tracks, and searches can sometimes take up to 30 seconds. That is just WAY too long. The more users and players that are active, the worse this is.
Modern browsers have IndexedDB, File APIs etc. I want the app to copy and synchronise the entire database to the client and perform searches locally.
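A first-run sync can be sketched with the standard IndexedDB API like this. The database name, store name, chunk size and helper names here are my own choices, not the app's real schema; writes go in chunks so one failed batch doesn't roll back the whole import.

```javascript
// Split the track list into fixed-size batches for writing.
function chunk(items, size) {
  var out = [];
  for (var i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Copy the server's track list into a local IndexedDB store, then hand
// the open database to the caller for local (fast) searching.
function syncLibrary(tracks, done) {
  var open = indexedDB.open("squeeze", 1);
  open.onupgradeneeded = function () {
    open.result.createObjectStore("tracks", { keyPath: "id" });
  };
  open.onsuccess = function () {
    var db = open.result;
    chunk(tracks, 500).forEach(function (batch) {
      var store = db.transaction("tracks", "readwrite").objectStore("tracks");
      batch.forEach(function (t) { store.put(t); });
    });
    done(db);
  };
}
```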

It looks like a piece of $hit

Text is tiny. It's a bunch of iframes, limited drag and drop. There IS a way of skinning the UI based on a bunch of Ext-JS widgets, and I've been down that road in the past. But I don't really like Ext-JS, and I'm still left with too much happening on the server, and HTTP polling - see the above gripes.
Ok, so I like the iTunes 11 user interface - and I never thought I'd ever like anything about iTunes. In particular I like this view - I like seeing the albums laid out with their covers.
I'm meticulous at making sure that I have album art - though of course with a large library this isn't perfect.  I like the way that the colours of the UI match the album cover, it's graphically very pleasing. I want the same thing for my squeezeboxes!





Time to roll my own

Logitech aren't going to build a super sexy new front end. So what's my Minimum Viable Product?
Well, I'm not too bothered about the plug-ins etc.; my primary goal is to have a beautiful view of my networked music library. I'm going to continue to use the stock UI in parallel for admin-type things.

A week off work, and a bunch of long days, and I have something functional

Here are the ingredients thus far:
  • node.js - I'm not much of a Perl head, and I want my solution to sit alongside the existing code without interfering with the Squeezebox server code base.
  • nginx with Web Socket support for exposure outside my home network (I use Dynamic DNS)
The server side architecture, at least in the first pass, largely just serves static web assets, and proxies the telnet connection to the app via socket.io

Client side - well, everything I write is a web app!
  • WebKit ONLY! Who cares, this is for me, I don't need to support IE. Long term I may well wrap the whole thing in Chromium Embedded anyway.
  • WebSockets
  • FileReader/FileWriter API. Every cover is cached in the client using persistent storage.
  • IndexedDB. The database is (largely) copied from the server to the client on first run.
  • As much as possible, all UI effects in CSS3, no JavaScript interactions.
  • ColorThief - thanks @lokesh, that saved me a bunch of time!
  • Single page app. No page refreshes.
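The iTunes-11-style tinting can be sketched like this, assuming ColorThief's getColor(img) returning an [r, g, b] array; the helper names and the luminance threshold are my own choices.

```javascript
// Turn an [r, g, b] triple into a CSS colour value.
function cssColor(rgb) {
  return "rgb(" + rgb.join(",") + ")";
}

// Pick black or white text so it stays readable on the album colour,
// using the standard perceived-luminance weighting.
function textColorFor(rgb) {
  var luma = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2];
  return luma > 128 ? "#000" : "#fff";
}

// Tint a panel from a loaded album-art <img> element.
function tintAlbumView(img, panel) {
  var rgb = new ColorThief().getColor(img);
  panel.style.backgroundColor = cssColor(rgb);
  panel.style.color = textColorFor(rgb);
}
```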

Where am I so far?

It's not ready for anyone else to use, but here's a screenshot or two

SEARCH VIEW

COLOUR MATCHING

Here's what I have so far:

  • Library synchronisation. But this needs rework, as it currently takes three hours (though only the first time the app is opened)! I think this is because the Squeezebox server likes to stat every track when pulling back the list. That sucks.
  • Playlist synchronisation.
  • Search is super fast - cached in IndexedDB, but loaded entirely into RAM in the client as well. I've observed the app using up to 500MB of memory, but that bothers me none; it doesn't (seem to) leak. I discovered that I built in support for regex searches by accident, which was nice.
  • Aggressive artwork caching using FileAPI
  • Drag and drop to change playlist order, or to drag in tracks from albums
  • Sweet CSS animations
  • 'New' Music view
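The accidental regex support probably falls out of something like the following: feed the query string straight to the RegExp constructor and "^Led.*IV$" just works. This is my own sketch of the idea, and the field names are illustrative, not the real schema.

```javascript
// Search the in-memory library; the query doubles as a regex pattern.
function searchLibrary(tracks, query) {
  var re;
  try {
    re = new RegExp(query, "i");      // case-insensitive, as you'd want
  } catch (e) {
    return [];                        // malformed pattern: no matches
  }
  return tracks.filter(function (t) {
    return re.test(t.title) || re.test(t.artist) || re.test(t.album);
  });
}
```

Because the whole library is already in RAM, this is a single pass over an array rather than 5 or 6 SQLite queries on a tiny server.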

Here's what I need to do

  • Super sexy fullscreen now-playing view.
  • Might pull in the Songkick API.
  • Display a view of 'titles' in the search - it's just artists and albums so far (though the search itself is done)
  • I scrobble all the tracks I play. I'm going to add the Last.fm API to pull this back somehow
  • bugs bugs bugs bugs bugs
  • Multiple player support
  • Saving/Retrieving playlists.

Monday, 10 January 2011

Really?

No posts since 2009? Wow.

Let me try and fix that a little this year. I will take an imagination pill or two.

Tuesday, 22 September 2009

Recording and playing back using the Ribbit JavaScript library

This post is just a little code sample. The Ribbit JavaScript library releases soon.

 

<html>
<head>
<script type="text/javascript" src="ribbit.1.5.3.0.min.js"></script>

<script type="text/javascript">
var callId;
var file;

function startCall(){
  Ribbit.exec({
    resource: "Calls",
    method: "createCall",
    params: { legs: [document.getElementById("legId").value] },
    callback: function(result){
      if (result.hasError){
        document.getElementById("result").innerHTML = result.message;
      }
      else {
        callId = result;
        document.getElementById("result").innerHTML = "You have started call " + result;
        document.getElementById("createCallDiv").style.visibility = "hidden";
        document.getElementById("startRecordingDiv").style.visibility = "visible";
      }
    }
  });
}

function startRecording(){
  var filename = document.getElementById("filename").value;
  var folder = document.getElementById("folder").value;
  var domain = document.getElementById("domain").value;

  if (filename.length == 0 || folder.length == 0 || domain.length == 0){
    document.getElementById("result").innerHTML = "Please supply all of domain, folder and filename";
    return;
  }
  if (filename.substring(filename.length - 4) != ".wav"){
    document.getElementById("result").innerHTML = "Please enter a filename ending in .wav";
    return;
  }
  file = "/media/" + domain + "/" + folder + "/" + filename;
  Ribbit.exec({
    resource: "Calls",
    method: "recordCall",
    params: {
      callId: callId,
      record: new Ribbit.CallRecordRequest(file, false, true, null, "1")
    },
    callback: function(result){
      if (result.hasError){
        document.getElementById("result").innerHTML = result.message;
      }
      else {
        document.getElementById("result").innerHTML = "Recording file " + file;
        document.getElementById("startRecordingDiv").style.visibility = "hidden";
        document.getElementById("stopRecordingDiv").style.visibility = "visible";
      }
    }
  });
}

function stopRecording(){
  Ribbit.exec({
    resource: "Calls",
    method: "stopRecordingCall",
    params: { callId: callId },
    callback: function(result){
      if (result.hasError){
        document.getElementById("result").innerHTML = result.message;
      }
      else {
        Ribbit.exec({
          resource: "Calls",
          method: "playMediaToCall",
          params: {
            callId: callId,
            announce: Ribbit.Call.ANNOUNCE_EN_US_CLASSIC,
            play: new Ribbit.CallPlayRequest([new Ribbit.CallPlayMedia("file", file, 0, -1)], null, "1", true)
          },
          callback: function(result){
            if (result.hasError){
              document.getElementById("result").innerHTML = result.message;
            }
            else {
              document.getElementById("result").innerHTML = "Recording has stopped, and you should be hearing " + file;
              document.getElementById("stopRecordingDiv").style.visibility = "hidden";
              document.getElementById("playbackDiv").style.visibility = "visible";
            }
          }
        });
      }
    }
  });
}

function hangupCall(){
  Ribbit.exec({
    resource: "Calls",
    method: "hangupCall",
    params: { callId: callId },
    callback: function(result){
      if (result.hasError){
        document.getElementById("result").innerHTML = result.message;
      }
      else {
        document.getElementById("result").innerHTML = "The call should be hung up. The file you recorded was " + file;
        document.getElementById("playbackDiv").style.visibility = "hidden";
      }
    }
  });
}
</script>
</head>

<body>
<table><tr><td>
<h1>Record a Call</h1>
<hr>
<div style="color:#ff0000" id="result"></div>
<br/>

<div id="createCallDiv">
<table>
<tr>
<td>Enter a telephone number</td>
<td><input type="text" id="legId" value="tel:"/></td>
</tr>
<tr>
<td></td>
<td><input type="button" onclick="startCall()" value="Start Call"/></td>
</tr>
</table>
</div>

<div id="startRecordingDiv" style="visibility:hidden">
<table>
<tr>
<td>Enter a file name ending in .wav</td>
<td><input type="text" id="filename" value=".wav"/></td>
</tr>
<tr>
<td>Enter a folder name</td>
<td><input type="text" id="folder"/></td>
</tr>
<tr>
<td>Enter a domain name</td>
<td><input type="text" id="domain"/></td>
</tr>
<tr>
<td></td>
<td><input type="button" onclick="startRecording()" value="Start recording"/></td>
</tr>
</table>
</div>

<div id="stopRecordingDiv" style="visibility:hidden">
<table>
<tr>
<td>You should hear a beep to indicate recording has started.</td>
<td><input type="button" onclick="stopRecording()" value="Stop recording"/></td>
</tr>
</table>
</div>

<div id="playbackDiv" style="visibility:hidden">
<table>
<tr>
<td>You should be hearing your recording played back</td>
<td><input type="button" onclick="hangupCall()" value="Hangup"/></td>
</tr>
</table>
</div>

</td>
<td></td></tr></table>

</body>
</html>




Wednesday, 5 August 2009

Easy binding in JavaScript

 

Felt like writing a framework that could bind the contents of an arbitrary HTML div to a JavaScript function. This is what I came up with. It is an evil use of eval. Any advances?

 

<html>
<body>

<div id="myDiv" binding="myValue" changed="flash"></div>

<script type="text/javascript">

function myValue(){ return "hello first value"; }

var checkBindings = function(){
  var elements = document.getElementsByTagName('*');
  for (var i = 0; i < elements.length; i++) {

    //look for the binding attribute on the element
    var bindingFunction = elements[i].getAttribute("binding");
    if (bindingFunction != null){
      //eval the bound function, which must take no parameters
      var newHtml = eval(bindingFunction + "();");

      //has it changed?
      var changed = newHtml != elements[i].innerHTML;

      //update it
      elements[i].innerHTML = newHtml;

      //if it's changed, call the function, passing in the changed element
      if (changed){
        var changedFunction = elements[i].getAttribute("changed");
        if (changedFunction != null){
          eval(changedFunction + "(elements[i])");
        }
      }
    }
  }
  //call this again in four seconds
  setTimeout(checkBindings, 4000);
};

//a function to fire on a changed event
function flash(element){
  element.style.backgroundColor = "#aa0000";
  setTimeout(function(){ element.style.backgroundColor = "#ffffff"; }, 3000);
}

//run the script
checkBindings();
</script>

</body>
</html>


Monday, 27 July 2009

How we build and test Silverlight

One of my projects at the moment is to write a Silverlight library that calls a web service.

For very good reasons that I won’t go into right now, we need to build with Ant.

We do this by calling the .NET 3.5 C# compiler directly.

Here’s a snippet of an ant build.xml file:

<property name="silverlight.path" value="C:/Program Files/Microsoft Silverlight/3.0.40624.0" />
<property name="silverlight.sdk.path" value="C:/Program Files/Microsoft SDKs/Silverlight/v3.0" />

<target name="compile-source">
  <exec dir="src" executable="C:/Windows/Microsoft.NET/Framework/v3.5/csc.exe" failonerror="true">
    <arg line="/optimize" />
    <arg line="/debug" />
    <arg line="/out:../build/bin/output_file.dll" />
    <arg line="/doc:../build/bin/output_file.xml" />
    <arg line="/noconfig" />
    <arg line="/nostdlib" />
    <arg line="/warn:0" />
    <arg line="/target:library" />
    <arg line="/reference:'${silverlight.path}/mscorlib.dll'" />
    <arg line="/reference:'${silverlight.path}/system.dll'" />
    <arg line="/reference:'${silverlight.path}/System.Core.dll'" />
    <arg line="/reference:'${silverlight.sdk.path}/Libraries/Client/System.Json.dll'" />
    <arg line="/reference:'${silverlight.path}/System.Net.dll'" />
    <arg line="/reference:'${silverlight.path}/System.Xml.dll'" />
    <arg line="*.cs" />
  </exec>
</target>

The /nostdlib argument tells the compiler NOT to include any of the base class libraries. This means we can pull in the Silverlight libraries we need as references. We will remove the /debug argument when we are releasing, and sign using a key file too. The final argument merely pulls in all the files from the working directory (in this case we explicitly set it to “src”) and compiles them. If you have a non-flat directory structure this might need tweaking.

For the testing itself we use this most excellent Silverlight testing framework.

This has perhaps one of the best methods of doing asynchronous testing that I’ve seen. Here’s an example

[TestMethod]
[Asynchronous]
public void MyTestMethod()
{
    bool done = false;

    ClassUnderTest testObject = new ClassUnderTest();

    EnqueueCallback(() => testObject.TestMethod());
    EnqueueConditional(() => done);
    EnqueueTestComplete();

    testObject.TestMethodComplete += delegate(object sender, OurEventArgs e)
    {
        try
        {
            TestResult result = null;
            result = e.Data as TestResult;
            Assert.AreEqual(0, result.TestProperty);
        }
        catch (Exception ex)
        {
            Assert.Fail("Exception - " + ex.Message + " - Stack Trace " + ex.StackTrace);
        }
        done = true;
    };
}

The [Asynchronous] attribute tells the test harness that this method is asynchronous.

All the Enqueue methods set up a series of work items for the test harness to execute asynchronously.

EnqueueConditional(() => done); tells the test harness not to run any more enqueued work items until the done flag has been flipped to true. Which is done in the anonymous method we’ve attached to the TestMethodComplete event.

So the sequence of events is thus

1. Set up a queued lists of callbacks, and wait stages

2. Subscribe to an event – which will be fired asynchronously once the underlying web call completes

3. Wait until that event has fired and then carry on

4. Stop running work items (EnqueueTestComplete)

Now we need to compile the test library and produce something runnable! Back to an ant target.

<target name="compile-tests" depends="create.config">
  <exec dir="test" executable="C:/Windows/Microsoft.NET/Framework/v3.5/csc.exe" failonerror="true">
    <arg line="/optimize" />
    <arg line="/noconfig" />
    <arg line="/nostdlib" />
    <arg line="/warn:0" />
    <arg line="/target:library" />
    <arg line="/out:../build/bin/output_test_file.dll" />
    <arg line="/unsafe" />
    <arg line="/reference:'../build/bin/output_file.dll'" />
    <arg line="/reference:'lib/Microsoft.Silverlight.Testing.dll'" />
    <arg line="/reference:'lib/Microsoft.VisualStudio.QualityTools.UnitTesting.Silverlight.dll'" />
    <arg line="/reference:'${silverlight.path}/mscorlib.dll'" />
    <arg line="/reference:'${silverlight.path}/system.dll'" />
    <arg line="/reference:'${silverlight.path}/System.Core.dll'" />
    <arg line="/reference:'${silverlight.sdk.path}/Libraries/Client/System.Json.dll'" />
    <arg line="/reference:'${silverlight.path}/System.Windows.dll'" />
    <arg line="/reference:'${silverlight.path}/System.Windows.Browser.dll'" />
    <arg line="/reference:'${silverlight.path}/System.Net.dll'" />
    <arg line="/reference:'${silverlight.path}/System.Xml.dll'" />
    <arg line="*.cs" />
  </exec>

  <copy file="test/AppManifest.xaml" todir="${build}/bin" />
  <copy file="C:/Program Files/Microsoft SDKs/Silverlight/v2.0/Libraries/Client/System.Json.dll" todir="${build}/bin" />
  <copy file="C:/Program Files/Microsoft SDKs/Silverlight/v2.0/Libraries/Client/System.Xml.Linq.dll" todir="${build}/bin" />
  <copy file="test/lib/Microsoft.VisualStudio.QualityTools.UnitTesting.Silverlight.dll" todir="${build}/bin" />
  <copy file="test/lib/Microsoft.Silverlight.Testing.dll" todir="${build}/bin" />
  <zip destfile="${build}/bin/testRunner.xap" basedir="${build}/bin" />
  <copy file="test/testRunner.html" todir="${build}/bin" />
</target>

This target is complicated by the fact that it creates a .xap file – the one that is sent to the browser. This file is merely a zip file that contains the necessary binaries and an AppManifest file, which I won’t include here – it’s a trivial file, describing to the runtime what the application looks like, its entry point etc.

testRunner.html contains the necessary code to download and run testRunner.xap from the web server.

Note that testRunner.html MUST be launched from a web server, and not from a file URI. We use WAMP. The choice of server is irrelevant; it’s serving static files.

One other interesting thing to note is that the service we are calling is on an SSL domain, and the client Silverlight application may well not be. Thus we needed to include on the root of the service domain a clientaccesspolicy file that looks thus:

<?xml version="1.0" encoding="utf-8" ?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="http://*" />
        <domain uri="https://*" />
      </allow-from>
      <grant-to>
        <resource include-subpaths="true" path="/" />
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

Those two domain nodes tell the Silverlight runtime not to care when cross-domain calls cross from an insecure domain to our secure domain.

Hope this post helps someone!

Friday, 24 April 2009

No wonder Seeqpod are in trouble

I'm really interested in music streaming on the web. There is no doubt in my mind that this is the future of how people discover music, due to the major ease of use.

In this post I'm gonna do a brief analysis of one of the major players, Seeqpod, who recently filed for bankruptcy protection, and suggest one or two reasons why.

Seeqpod's infrastructure is made up of a number of components.

Firstly, a web crawler. This must scour the web for URIs that contain the string ".mp3". Once it finds one, it must pull into its server at least the first few kilobytes of that tune in order to read the ID3 tags, the parts of an MP3 that provide metadata such as bit rate, album, artist, track name etc. This is potentially dodgy from a legal position, as it means that they are "downloading" the tune, even if it's just into RAM. It then stores these tags in its database, along with the URI that tells them where they got it from. They probably clean these results occasionally to ensure that the links are still good.

Secondly, Seeqpod has a music player, and this is where the cost analysis gets interesting.

Now, in order to play music, Seeqpod use a Flash application. This is still the only really sensible way of playing music and video on the web.

The audio playing part of the flash binary doesn’t really care where the audio lives on the web. Provided the file exists, it will play it. There are no cross domain issues unless you want to analyse the content of the file on the fly, to provide a spectrum analyser for example, in which case the audio must be served from your domain, or a domain which provides an appropriate cross domain policy file.

By playing an MP3 file from any random server, you are at the mercy of the domain that is streaming the audio to provide a reasonable throughput to your users. If the network connection that the hosting site provides is weak, or overloaded, then the tune will stutter, or not stream at all. There are various ways this can be mitigated, and the approach Seeqpod has taken is to proxy all requests through their servers, and then download the track, in its entirety, to the Flash player before it starts playing. Finding this out was the work of minutes, using Firebug. You can check this yourself.

I was a bit shocked when I discovered this. I’d assumed that their player just pulled in the audio from wherever it was hosted, to prevent them facing the even more dubious position of having the entire tune, which of course may be copyrighted, pass through their server, even if they don't host or cache it. Realistically, given the speed with which a tune downloads, I suspect (but can’t prove) that they are caching tunes on their server or CDN, committing them to hard disk. This is liable to cause them significant legal issues, even if it's a "cache".

Further all this proxying must cost them a lot of money!

Let's assume that the average size of a tune is 8 MB. This may be a high estimate, but it’s adequate for my calculations.

Seeqpod uses Level 3 for bandwidth, and I don't know how much they are getting charged, but using Amazon's Web Services as an indicator of the cheapest available bandwidth costs, 1 GB of data transferred costs $0.10. To get an 8 MB tune to their customer, Seeqpod must download it from the host, and then serve it to their user. This is two hops for each file, so they have to transfer 16 MB of data in total, at a cost of $0.001563. If they are caching, then there is realistically little difference between storage and data transfer costs per GB.

Seeqpod claim, on their landing page, to have about 120 million music-related searches per month. If we assume that each search results in two tunes played, which I have no evidence for, then Seeqpod are paying $375k per month in bandwidth costs alone, or $4.5 million per year. I'm astonished that they are prepared to pay this much money on proxying, though doubtless it makes the service a good bit better. I really hope I have these numbers wrong, for their sake, as if I haven't it may well be the death of them.
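The arithmetic above, spelled out. Every input here is one of the post's stated assumptions, not a measured figure:

```javascript
// Back-of-envelope bandwidth cost model for the proxying approach.
var tuneMB = 8;                  // assumed average track size, in MB
var costPerGB = 0.10;            // AWS-style transfer price, USD per GB
var hops = 2;                    // host -> Seeqpod -> listener
var searchesPerMonth = 120e6;    // figure from their landing page
var tunesPerSearch = 2;          // pure guesswork, as admitted above

var costPerTune = (tuneMB * hops / 1024) * costPerGB;   // ≈ $0.0016
var monthlyCost = searchesPerMonth * tunesPerSearch * costPerTune;

console.log(Math.round(monthlyCost));        // ≈ 375000 USD a month
console.log(Math.round(monthlyCost * 12));   // ≈ 4.5 million USD a year
```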

Crunchbeat reports that they have received $7 million in Angel funding and have 25 employees, which, assuming average wages + overhead per employee costs of $150k, is another $3.75 million per annum they’re paying.

Apparently they have also received some private investment. And they have some paid-for services (their Echo service), so they may have some revenue growth. Which is nice for them.

But on the other hand, I’ve ignored a lot of costs. Servers. PR/Marketing. And they must be spending a lot on lawyers, what with Warner Bros, EMI etc chasing their tail, they apparently already owe not far off $500k in legal fees.

No wonder they’ve filed for bankruptcy protection. No wonder they are opening up their assets, "maybe for the good of all", before it's too late.

Oh, and as to the "not facilitating downloading" argument they offer the record companies, they show you the originating URL of each file you're playing. A little bit of wget and I've downloaded it. This is an awkward one. Writing a basic app to search Seeqpod using their API, and then download the tunes would take all of half an hour. On the other hand, this is the same as Google displaying links to copyrighted newspaper sites.

Is my analysis wrong? Have I missed something? Your thoughts, as ever, are welcome.

Sunday, 22 February 2009

poetic_terrorism.mp3

Ok guys, time for y’all to download my first mix of the year (150 MB). The first for nearly a whole year too!

This mix is one messed up trip round a whacked up world of music, covering over thirty-odd years of electronica, jazz, funk, rock, hip-hop and others; but mostly cross-genre and fairly unclassifiable.

If you like several of the tracks on here, your sanity might be wavering.

If you like all of them you must be as messed up and twisted as I am.

****

So, why is it called poetic terrorism? What exactly is that?

When I was about 18 I came across a writer who I might have mentioned before, Hakim Bey, who made more than a little impression on the youth I was then. I read, and reread, a tract of his called “T.A.Z. Ontological Anarchy and Poetic Terrorism”, which you can peruse in full here, along with some of his other works.

I was hooked from the first sentence - "Chaos never died".

(Aside - Once I lived in the Temporary Autonomous Zone of Easton, in Bristol - at least that's how signs had been decorated...)

The paragraph on Poetic Terrorism I'll quote in full. Excuse me the indulgence of so much copy and paste.

****

"WEIRD DANCING IN ALL-NIGHT computer-banking lobbies. Unauthorized pyrotechnic displays. Land-art, earth-works as bizarre alien artefacts strewn in State Parks. Burglarize houses but instead of stealing, leave Poetic-Terrorist objects. Kidnap someone & make them happy. Pick someone at random & convince them they're the heir to an enormous, useless & amazing fortune--say 5000 square miles of Antarctica, or an aging circus elephant, or an orphanage in Bombay, or a collection of alchemical mass. Later they will come to realize that for a few moments they believed in something extraordinary, & will perhaps be driven as a result to seek out some more intense mode of existence.

Bolt up brass commemorative plaques in places (public or private) where you have experienced a revelation or had a particularly fulfilling sexual experience, etc.

Go naked for a sign.

Organize a strike in your school or workplace on the grounds that it does not satisfy your need for indolence & spiritual beauty.

Graffiti-art loaned some grace to ugly subways & rigid public monuments--PT-art can also be created for public places: poems scrawled in courthouse lavatories, small fetishes abandoned in parks & restaurants, xerox-art under windshield-wipers of parked cars, Big Character Slogans pasted on playground walls, anonymous letters mailed to random or chosen recipients (mail fraud), pirate radio transmissions, wet cement...

The audience reaction or aesthetic-shock produced by PT ought to be at least as strong as the emotion of terror-- powerful disgust, sexual arousal, superstitious awe, sudden intuitive breakthrough, dada-esque angst--no matter whether the PT is aimed at one person or many, no matter whether it is "signed" or anonymous, if it does not change someone's life (aside from the artist) it fails.

PT is an act in a Theater of Cruelty which has no stage, no rows of seats, no tickets & no walls. In order to work at all, PT must categorically be divorced from all conventional structures for art consumption (galleries, publications, media). Even the guerrilla Situationist tactics of street theater are perhaps too well known & expected now.

An exquisite seduction carried out not only in the cause of mutual satisfaction but also as a conscious act in a deliberately beautiful life--may be the ultimate PT. The Poetic Terrorist behaves like a confidence-trickster whose aim is not money but CHANGE.

Don't do PT for other artists, do it for people who will not realize (at least for a few moments) that what you have done is art. Avoid recognizable art-categories, avoid politics, don't stick around to argue, don't be sentimental; be ruthless, take risks, vandalize only what must be defaced, do something children will remember all their lives--but don't be spontaneous unless the PT Muse has possessed you.

Dress up. Leave a false name. Be legendary. The best PT is against the law, but don't get caught. Art as crime; crime as art."

****

Obviously I'm not condoning you all go out and commit criminal acts for the sake of art. But the essence that I take from that small piece is that sometimes art isn't pleasant, sometimes we need to go and explore dark concepts and do something wild as a result.

And that's OK. I mean, what's the craziest thing you've done lately?

Anyhow, here's the track listing. It’s a pretty unlikely mixture, but I think I made it cogent. I wonder how many of you will bother to listen all the way through, to hear the quote above read by its author.

Time Track Artist Album
00:00 Nosferatu Jad and David Fair 26 Monster Songs For Children
01:38 Enjoy Your Tea John S Hall and Kramer Real Men
03:34 Speed The Road, Rush The Lights Piano Magic Speed The Road, Rush The Lights
11:09 Roach Eardrum Last Light
12:56 LETSmakeOURmovies AGF Westernization completed
15:14 Breath Controls Headset Space Settings
19:56 Be Your Own One Self Be Your Own
23:25 Ding Dang The Les Claypool Frog Brigade Purple Onion
29:07 The Millennium Falcon Jaga Jazzist Jævla Jazzist Grete Stitz
32:48 Starbase One Luke Vibert Amen Andrews Vol 1
36:13 Kokoni Sachiari Asa Chang And Junray Jun Ray Song Chang
36:36 Mother The Police Synchronicity
39:32 Physical Adam And The Ants The Peel Sessions
43:29 Venus in Furs Jim O'Rourke A Tribute To the Velvet underground
50:05 Never mind (What Was It Anyway) Sonic Youth NYC Ghosts and Flowers
55:18 God In My Bed Z-Rock Hawaii Z-Rock Hawaii (now worth £70, offers welcome)
58:52 Skanky Panky Kid Koala Some Of My Best Friends Are DJs
62:12 Eros Tortoise Standards
66:24 Ringer Four Tet Ringer
76:11 Poetic Terrorism Hakim Bey/Bill Laswell TAZ

Sunday, 1 February 2009

Internet Explorer - Why do they still bother

This is a rant. If you're not in the mood to read a rant, please move on. There's loads more content on the interwebs for you.

A friend of mine from Microsoft (won't mention names) pinged me the other day. He was bouncing ideas off me for a talk he wants to do at Tech Ready, an MSFT internal conference, provisionally entitled Cloud 4.20. Made me chuckle.

We got chatting, as you do.

And I got on to a rant. About Internet Explorer.

For many years I was a bit of a Microsoft specialist in the organisation I work in. I ran a .Net Focus Group. To some extent I evangelized use of the Microsoft platform in the company. Now I've changed; now I plug open source software, and use it. Live and learn. The exception here is Windows - I get a PC from work, and frankly, running Windows is just simpler than installing and learning a Linux distro. I don't pay for MS software; I get a license with the machine, and I still get an MSDN subscription. Mind you, this is probably the last year for that. I'm running the Windows 7 beta, and my advice to anyone running Vista would be to get hold of a copy and upgrade (I mean, of course, re-install). I've had virtually no software incompatibilities, it runs faster, and it annoys me far less than Vista did. Windows 7 is an improvement. Thank the Lord.

You know what the very first thing I did when Windows 7 booted for the first time (in less than 15 minutes from whacking the disk in, I may add)?

I downloaded Firefox.

See, let's be honest: if the browser wars are still going on, then Microsoft are just so far behind. Yes, largest market penetration, blah blah blah, but we all know that's historic, and because most folk don't know better. Or they have locked-down corporate PCs and have no choice, because many of their line-of-business apps only work in IE. (My organisation is VERY guilty of this.)

Anyone who's spent much time with Firefox (or even Safari for Windows, or Chrome) is highly unlikely to go back to IE. I mean, on the Windows 7 beta, running its un-upgradeable build of IE 8 (what's with that?), Firefox loads quicker, performs better, and overall gives me a happier surfing experience. I can customize it to my heart's content (web devs out there - Firebug - need I say more?), and the vast majority of web sites render better.

And of course if you're on a Mac, then you're never gonna install IE. Be serious.

I build web sites. I build them using Firefox. I'm mindful while I do it that I'm gonna have to change things for IE, but I probably care about that far less than I should. However I'm pretty confident that any modifications for Safari, Opera and Chrome, will be minor and straightforward, and not leave me pulling my hair out with frustration.

Which IE inevitably does. It plays with my mind. It upsets me. It has erratic behaviour, and this is NOT the place to document it. That's not the point of this rant.

The point of this rant, and the conversation I had with my Microsoft buddy, is to wonder why they still bother producing it.

Does anyone pay for it? No. Not a chance. After all, other browsers are free to use. So it's not bringing in any direct revenue for Microsoft.

Is it a platform play? Well, I'll agree that Office, along with Exchange and Sharepoint and other back-end pieces, is a platform play. I'll agree that .Net is a platform play. The Windows OS is a platform play (though for how much longer, one has to wonder). Windows Azure is the future of platform plays, and, from what I've seen, pretty well thought out - roll on the PHP-in-the-cloud support; depending on the price point, it might entice me. But Internet Explorer? Is that a platform play? I don't think so. How does owning the browser, which is a standard bit of software these days, bring more people to the Microsoft platform? I'll tell you how. Not One Jot.

Alright, you say. What about security? Surely by owning the browser, and being able to patch it at will, and control how it hooks into the OS, surely that's important. And I may concede on security a little, albeit reluctantly.

Here's what I think Microsoft should do. They should put their hands up and say, loudly and honestly, "You know what? We're stopping IE8 development. We are not going to deploy Internet Explorer with Windows 7. We are going to have a lightweight browser called IE Lite for use in Office, and all those Web Browser controls, &c. But we are moving the IE team into maintenance mode, and redeploying the staff worth keeping to work on the Mozilla code base, WebKit and Google's V8 engine. Collaboratively. With the community. We will install an open source browser with Windows. We will make it the browser used for debugging from Visual Studio. We will stop telling everyone to put in this IE 8 compatibility tag, and instead we will work with standards bodies, the Firefox team, Apple, Google, whoever, to make sure that the world has the most consistent, secure, extensible, and best-performing browser it can have. At the same time, we will release the world's best Web Platforms. Windows Azure. Live Mesh. Silverlight (ok, if you must). While this decision has been hard, we and our shareholders agree that it is the right thing to do. We look forward to moving the Web forward in a positive way, with greater collaboration with the rest of the world."

And do you know what? I think this would be a major win for them. Fewer people would turn away from Windows. Developers would scream at them less. They'd reduce some head count, as maintenance and engineering can be slimmed down. Open source fanatics would go OMFG. I can't believe it. Perhaps I should look at what else Microsoft are doing? Possibly killing IE would help push Azure. Y'never know.

They'd also get the freedom to spend more time on Service Specific Browsers (like Flock), should they have the urge.

Well, at the end of this rant, my buddy, who is a very faithful MSFT employee, kinda bought my arguments. He certainly didn't give me the impression he'd miss attempting to sing the praises of the IE beast.

What do you think? Is there even anyone who reads my posts who still uses Internet Explorer? Any other advice for the Beast of Redmond?

Saturday, 31 January 2009

Dear Friends, I'm sorry I've been away so long.

Yeah, I'm crap, I know.

There's a number of reasons.

After my last post in July (was it really that long ago?) I went on holiday with my new girlfriend. We stayed with some friends in France, and we all filmed a little thing called Hairy Pouter and the Fatal Flaws. Here's a little preview.



While this exercise was exhausting, it was with friends, and it was fun, and all told, this break was perhaps the most relaxed I've been for a very long time. I vowed to keep that relaxation momentum. I probably failed.

But that's not the reason I haven't been blogging, though this new relationship has been keeping me happily distracted.

When I got back from France, things had changed at work. I'd spent most of the first half of the year somewhat involved at work with the acquisition of a California based start up, and everything got signed while I was away. Now, when a large enterprise spends that much money, it has to justify that to stakeholders, and part of that justification was cost savings elsewhere. That meant the dissolution of the project I was involved in. We learnt loads from that work, and I was sad to see it go, but I reckon it was the right thing to do, for a number of reasons. That would all be for another post, and probably one I won't write.

I spent the next month or so at work kicking the tyres; I wasn't sure what my role was, and nobody could tell me. It was a lovely summer though! You'd've thought I had the time then to blog a little, but I really was in another world. I started losing the blogging habit; it just seemed to fade away, after only a few months of writing. Till now!

Then I got a role in the startup we'd just acquired. This was (and still is) pretty much an engineering job. Being further from the frontline of strategy and decision making, and all the heresy and evangelizing I make part of that, has been a nice change. Can't last too long, I'll get bored, but at least I've learnt some new coding skills. I'm afraid, all my MSFT buddies, I've caught the open source bug, and I'm not shaking it. This I could explain more of in another post. Maybe I will. The point being, finding myself away from the frontline and the bleeding edge means I've had less inspiring me to write. If you've nothing interesting to say, don't bother.

I've been working at home the large majority of the time. In fact I'm delighted to report I haven't stayed in a hotel or flown anywhere since June. The folk who monitor expenses at work must be delighted! So you'd've thought that so much less travel would put time in my hands to write, put together mixes, and generally share verbiage with you.

But, I've been working very hard on a side project. It's not quite done, I've been saying it's nearly ready for months, and that's true, it nearly is. But it always takes longer and costs more! I've been creating this thing with two really good friends, and staying with them when in London. You can bet your bottom dollar that I'll be writing more about this project in due course.
If you're a bit of a hippy, then it's a psychedelic toy for the web. If you would prefer a more pitch-friendly description, then it's "social media aggregation" and "an inspiring front end to the photos and music folk contribute to the web and share with their friends".

So much of my time has been eaten up by this - when I find myself with free time, I want to move this project forward, not blog. The motivation and drive is good for me, and I'm learning loads. And maybe I'll launch something successful! If at first you don't succeed...

The final significant reason why posts from me have been absent is Twitter (http://twitter.com/san1t1).

I love Twitter. Working from home a lot, being able to participate in conversations, serious and fun makes me feel I'm connected to lots of people on a lovely basis. It gives me a forum to discuss ideas with others. I can say what I'm doing and, to some extent, how I'm feeling about it, and, well, it's all good. Many others have written on the virtues of Twitter, I have little really to add. It would be like reporting on the Instant Messenger revolution. It's not news to me anymore.

So, less inspirational job, side project, twitter, and the distractions of a lovely lady have contributed to my failure to write here.

However.

I have promised some folk I'll write a post on the state of Cloud Telephony as I see it. I've been mulling things over, talking to folk (I have a nice little prospect out of that!), tweeting away, trying APIs, comparing business models, and I'm nearly ready to share my thoughts. That's a post to come.

I have two mixes to upload. (well, one and a half). They'll come soon too.

Mostly I'm looking forward to introducing the world to the thing I've been beavering away on for the last few months.

In short, expect more soon!

Saturday, 5 July 2008

Cloud or Mesh. Relational or Hierarchical. Highly Distributed Logical Data Centres

As you know I'm really interested in how web applications are going to be architected as the internet age moves on. One of the dichotomies I'm trying to resolve in my mind is how data is stored with highly distributed applications.
What do I mean by distributed? For the purposes of this post let's just assume this means an application that is accessible from different devices, and is not bound to a single machine. Classically, this is a web site, or a client application that uses some kind of API to store data on the web.



Seems that the approach up until recently was to store your data on servers in a co-lo or dedicated data centre. Meaning that as an application developer and/or operations dude I have to scale my application based on physical architecture I know about. Generally as my app scales that means I need, eventually, to horizontally scale my database across more than one logical database. This is not straightforward, and even with the introduction of Hibernate Shards I really need to think about that. And this probably means I'm going to denormalize my database and have to work out how to synchronize some of the data I'm storing across these different logical DBs.
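As a sketch of what that horizontal scaling means in practice, here's a minimal hash-based shard router of the kind tools like Hibernate Shards automate for you (the shard names and key format are hypothetical, just for illustration):

```python
import hashlib

# Hypothetical list of logical databases the data is partitioned across.
SHARDS = ["db0", "db1", "db2"]

def shard_for(key: str) -> str:
    """Route a record to a logical database by hashing its key."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always routes to the same shard, so reads find what
# writes stored -- but cross-shard queries now need application logic.
assert shard_for("user:42") == shard_for("user:42")
assert shard_for("user:42") in SHARDS
```

Part of why this is "not straightforward": joins and aggregates that span shards have to be done in the application, and adding a shard later remaps most keys unless you use something cleverer than simple modulo.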


It strikes me though that with "cloud storage", things like Amazon SimpleDB, or Google's App Engine, I may want to start with a hierarchical database that is denormalized by default. No more joins. I guess we've had this option for a long time with things like Oracle Objects, but seriously, have you as a developer ever tried to use that beast? Not fun. Google and Amazon (and soon Microsoft, with SQL Server Data Services) will have solved that "synchronize data in a denormalized, logically partitioned database across many data centres" problem for me. So should I start by using that approach? Should I offload my database to these guys and just pay transactionally for what I do? This means a significant mindset change for me: I'm so used to drawing out relational diagrams, and so used to using ORM or other mapping tools to abstract me from that. But I guess the benefit of this approach is that from the beginning, provided these big guys aren't lying to me, I have an app that will scale, that will respond consistently, is backed up and disaster-resistant, and that I only need to pay for on demand. This is Goodness.
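To make the mindset shift concrete, here's a toy comparison using plain Python dicts (not real SimpleDB or App Engine calls): the relational style joins three tables at query time, while the denormalized style stores every attribute on each item, so a single lookup answers the same query.

```python
# Relational style: normalized tables, a join at query time.
artists = {1: {"name": "Orbital"}}
albums = {10: {"title": "In Sides", "artist_id": 1}}
tracks = [{"title": "The Box", "album_id": 10}]

def track_with_artist_join(track):
    album = albums[track["album_id"]]
    artist = artists[album["artist_id"]]
    return {"title": track["title"], "album": album["title"],
            "artist": artist["name"]}

# Denormalized style: every item carries all its attributes; no joins,
# but updating an artist's name now means touching every item.
items = [
    {"title": "The Box", "album": "In Sides", "artist": "Orbital"},
]

def track_denormalized(title):
    return next(i for i in items if i["title"] == title)

# Both answer the query; only one needed to walk foreign keys.
assert track_with_artist_join(tracks[0]) == track_denormalized("The Box")
```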


Can't help thinking that this approach still requires a bunch of data centres and the associated power, and that, as an app developer, this will have an eventual cost for me.


This brings me to Mesh, or Grid computing. If you're reading this, your PC is on right now, and, as I am using Blogger to host my blog, you're pulling data back from Google. Now, I don't have the world's most-read blog, I don't get thousands of hits a second, but still, for everyone who has read this blog there's a good chance that all this text is cached on their machines. And it all originated from the machine I'm typing on.


You're familiar with swarm based file sharing, right? Where somebody seeds a file, and then others leech it, and when they have downloaded it, they become another seed on the network? Indeed they can start sharing partial data as soon as they've downloaded it? There's no central store of this data, just some metadata that tracks where the bits are. This is how BitTorrent works (and indeed, how the BBC iPlayer works in offline mode, which is why they ask you to dedicate 20GB of hard drive space).
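Here's a toy, deterministic model of that swarm behaviour (not the real BitTorrent protocol, just the seed/leech idea): each round, every leecher fetches one missing piece from any peer that already holds it, so the whole swarm completes in as many rounds as there are pieces.

```python
def simulate_swarm(num_pieces: int, num_leechers: int) -> int:
    """Return how many rounds it takes every peer to hold the full file."""
    # Peer 0 is the original seed with every piece; the rest start empty.
    peers = [set(range(num_pieces))] + [set() for _ in range(num_leechers)]
    rounds = 0
    while any(len(p) < num_pieces for p in peers):
        for peer in peers:
            missing = [pc for pc in range(num_pieces) if pc not in peer]
            if missing:
                # Fetch the first missing piece from any peer that has it --
                # the seed, or another leecher sharing a partial download.
                for other in peers:
                    if other is not peer and missing[0] in other:
                        peer.add(missing[0])
                        break
        rounds += 1
    return rounds

# Every leecher gains one piece per round, so 4 pieces finish in 4 rounds,
# no matter how many leechers join -- capacity grows with the swarm.
assert simulate_swarm(num_pieces=4, num_leechers=5) == 4
```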


Why don't we have this approach for other forms of application?

I envisage a future where logical hierarchical databases are partitioned across end nodes, such as the PC you're reading this on, where your PC can take part in large map/reduce calculations, and where (best of all) you can have your PC and broadband for free, because application developers are renting space on it.
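The map/reduce part of that picture can be sketched in a few lines; here each "node" (a PC or home hub) is simulated in-process, counting words over its local shard of the data before a reduce step merges the partial counts:

```python
from collections import Counter
from functools import reduce

# Each node holds a local shard of the data (hypothetical example data).
shards = [["cloud", "mesh"], ["mesh", "grid", "mesh"], ["cloud"]]

def node_map(shard: list) -> Counter:
    """The map step: each node counts words in its own shard."""
    return Counter(shard)

def merge(total: Counter, partial: Counter) -> Counter:
    """The reduce step: combine two partial counts."""
    total.update(partial)
    return total

partials = [node_map(s) for s in shards]      # would run on each node in parallel
total = reduce(merge, partials, Counter())    # merged wherever results converge

assert total == Counter({"mesh": 3, "cloud": 2, "grid": 1})
```

The point of the pattern is that the map step touches only local data, which is exactly what makes it a fit for compute scattered across end nodes rather than one data centre.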

Google and Amazon are busy building out compute and storage in the cloud with all their data centres, for which they have to pay for power. Good for them. But telcos already have the makings of a grid which could, with some clever software, compete with all this, and at a much lower cost base.

In my house I have a BT Home Hub. This is a wireless router, and connects back to the internet through BT as my ISP. What's more, it's just an embedded Linux Device. Further, unlike my PC, I tend to leave it on the whole time. There's also enough space in it to throw in a hard drive, or some solid state storage. It could act as a node in this grid I'm envisaging. It could even negotiate with the PCs connected to it and utilise their storage and CPU.

BT could give this to me - for free - and then charge back to application developers the cost of storage and compute. Without the need to ever build data centres, and offloading the cost of all the power required to run server farms.

I understand that there are issues around latency, concurrency, routing, and a whole bunch of other problems to solve. But I reckon, that rather than attempting to replicate the approach that Amazon and Google and a host of others are doing, telcos should concentrate on taking their existing deployed Customer Premise Equipment assets and building out storage, compute and content distribution based on this.

What do you think? Am I in cloud cuckoo land again?

Tim Stevens
