Learnings part 2

This post will be a little shorter than the previous one, but I hope you still find some value in it.

3) People will be nice when they perceive things going their way.

Everyone loves good news. It lifts their mood and makes them far more hopeful about the future. They also tend to be easier to get along with, and more generous with time, money and other resources.

This includes you!

4) Things won't always go your way.

While the old adage ‘make hay while the sun shines’ still holds true, you should also be prepared for times when things aren’t going your way. Hold back some cash, have a plan B (and a plan Z), and generally keep your options open.

5) Great people will help you reach goals even when things aren’t going their way.

The most successful people I have ever met are not the ones willing to stab someone in the back to get ahead; they are the ones who see you fall and go out of their way to make sure you are okay.

This produces a lot of goodwill, and it tends to come back to them many times over. People remember the kind word when they were low and the helping hand when they were on the ground.

Sure, there will be people who try to take advantage of this, but they will be in the minority.

Is coding art?

I was recently asked this question and, after lengthy contemplation, I have to argue that, as most people practise it, no, it is not art.

I am not trying to take away from the great works of these people. Far from it. Being ‘art’ is just a label. It doesn’t make it better or worse than anything else.

So why am I stating that most code is not art? It comes down to a few reasons:

  • The act of writing source code is creative, but it is not the finished work.
  • The finished work in most languages (raw machine code and some interpreted languages aside) is not the work of the coder but the output of the compiler; the coder generally can’t predict the exact form the output will take, only its function.
  • The only form we can really judge the creator on is the user interface (and perhaps the resource requirements, as in 64k code competitions), but this is very different from putting pigment on canvas or other similar processes we easily define as ‘art’.
  • If we can only judge the creator’s code on its function and not its form, then it isn’t art. It is creative, but not art as such.

This last point, I think, is the important one. Building architecture is a hybrid of art (form) and engineering (function), and I think we should look at programming from a similar perspective.

Modern architecture uses a lot of computer assistance to design a building; from calculating loads and resource requirements to in some cases designing the actual form of the building itself with only minor tweaks from the architect. This is a very wide range and we accept that (although arguments still exist both ways).

So can programming be art? I think so, especially where the source code itself is the output. Take ‘Obfuscated Perl’ contests as an example: the code is the thing to be admired; the function is merely a selection criterion. The ‘Brainfuck’ language is a little less clear cut, as the language is still compiled, but its only real use is the challenge of writing or understanding the code. I suppose in this context it could be looked at as performance art, except that you are generally performing alone.

Logitech G510 Keyboard (GB) under Wheezy

My previous keyboard (a cheap and nasty replacement for an IBM Model M) started to die recently.

It wasn’t a massive failure, just some keys not registering first time – not useful when coding where accuracy is important.

After looking through the options and talking to friends, I realised that a great keyboard would set me back well over £100, so I looked for cheaper alternatives.

A friend suggested looking at gaming keyboards. There are a lot of cheap gaming keyboards on the market and most of them are complete crap. One name that has been around for a while and does have some good reviews is Logitech.

After considering the options I settled on a G510, as there were some pristine second-hand models going cheap on eBay.


There wasn’t so much of an unboxing as an opening of the reasonably well-packed jiffy bag.

I liked what I saw. So I swapped it out with my old keyboard on my Debian Wheezy workstation.

Surprisingly, it worked. Out of the box the media keys and normal keys worked absolutely fine. The display was fixed showing ‘G510’ in blue, with a blue backlight on the keys, but it was a good start. Worst case, I could live with this level of function.

I initially tried installing the g15daemon and its apps. After solving the /dev/input/uinput issue in the initscript and getting a version of the library that supported the G510, it looked like I was getting somewhere. But I hit a few stumbling blocks and tried the alternative option…
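For anyone hitting the same /dev/input/uinput issue, the fix I used was along these lines. This is a sketch from memory rather than an exact recipe, and assumes the failure is simply that the uinput kernel module isn’t loaded by default on Wheezy:

```shell
# Assumption: g15daemon fails because /dev/input/uinput does not exist.
modprobe uinput                 # load the module and create the device node now
echo uinput >> /etc/modules     # ensure Wheezy loads it again on every boot
ls -l /dev/input/uinput         # confirm the node exists before restarting g15daemon
```

Both commands need root; your init script path and module-loading setup may differ.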

The Gnome15 project is surprisingly well polished. After adding the repo, installing their gnome-suite-fallback metapackage and rebooting (logging out alone didn’t seem to get me all the way there), I had a fully working keyboard. I was surprised; happy, but very surprised.

The notification area control app doesn’t have many functions, but it has enough to make it very useful.

Screenshot from 2014-10-09 17:52:40

The first tab controls the backlight and whether the display cycles between the various enabled plugins.

Screenshot from 2014-10-09 17:52:43

The second controls the various macros (my media profile inherits the macros from the default profile and so has none of its own).

Screenshot from 2014-10-09 17:52:46

…and the third controls which plugins are enabled.

The way I am using it is to have a default profile that only enables a couple of plugins (primarily the pomodoro plugin for coding) and a second profile with my media player displaying on the screen. This seems to work well enough for me to start with.

Overall I am happy with the keyboard. The build quality is good and the key response, while a little spongy, is usable. I would certainly be tempted to go up the range a little to get something with better switches as funds allow.

Edited to add:

Low light use

Not massively bright, but certainly enough to code or game by. The monitors kick out a lot of light, but in a terminal session where the screen is mostly black, the backlight is certainly bright enough if you do need to look at the keyboard (like when I am one key out from the home position).



Learnings

A few months ago I posted a list on Twitter of 10 things I have learnt in my career; things that I wish I’d known at the start.

In the next few posts I will expand on these points.

1) Short term fixes will 90% of the time become long term production processes.

Much as we like to believe that a quick fix will be just that, and that we will come back and fix it properly when we have time, it is rare that we ever find the time to do so.

This isn’t about intentions or anything else. It simply comes down to two facts:

  • This problem isn’t an irritating enough ‘itch’ any more and so you aren’t compelled to scratch it.
  • Once it is in production it becomes business critical and it becomes far harder to do anything to change it.

This ends up being an excellent argument for fixing it right the first time. Alas, in a business-driven environment it is hard to commit the resources to doing it right when ‘good enough’ will fit the business need.

2) You will never have full requirements; if you do, they are out of date.

TL;DR version: Agile and similar iterative processes are the right way to do things. Business and technology changes too quickly.

When I learnt about Computer Science in the early 90s there was no agile. Waterfall was the normal method taught and people used it – and projects failed.

Not all projects failed; that would just be silly. But enough failed that projects didn’t fit the changing business needs particularly well.

With the advent of drag-and-drop UI builders and languages like VB (much as I hate the language), apps became quicker to develop and fairly consistent in their UI.

Extreme programming and agile were an extension of this. They didn’t have all the answers, but they changed the model from upfront design to incremental improvement. You could put a mostly working app in front of a user in a few hours, and the user could start to understand what you were describing.

This accelerated the pace of the IT industry. We now see an MVP as an integral part of most startup’s business plan and the only people left doing waterfall or similar development methods are mission critical application developers (flight systems for aircraft and spacecraft, process controllers etc.) and government contractors who are lagging behind the times by about 20 years – although it is nice to see this is changing.

2a) Design with change in mind

This wasn’t in the original set, but I think it is important.

If you design and build tightly coupled systems then sure, you will get maximum performance out of the system. The problem is that changes to the system become difficult; changes to part of the system end up requiring changes elsewhere as the interfaces and data structures change.

With this in mind, I now tend to recommend loose coupling using well-defined interfaces. By using interfaces like REST or message buses, and interchange formats such as JSON (and XML if you really must), you can avoid the need to make changes in multiple places as business requirements change.
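As a minimal sketch of what that loose coupling looks like in practice (Python, with entirely illustrative field names, not any real API): the producer serialises its internal record into a small, versioned JSON document, and the consumer depends only on the named fields it needs, so either side can change internally without breaking the other.

```python
import json

def to_wire(order):
    """Producer side: emit a stable, versioned JSON document.
    Internal-only fields (e.g. warehouse_shelf) are deliberately not exposed."""
    return json.dumps({
        "version": 1,
        "order_id": order["id"],
        "total_pence": order["total_pence"],
    })

def from_wire(payload):
    """Consumer side: read only the named fields and ignore anything extra,
    so the producer can add fields later without breaking this consumer."""
    doc = json.loads(payload)
    if doc.get("version") != 1:
        raise ValueError("unsupported message version")
    return {"id": doc["order_id"], "total_pence": doc["total_pence"]}

# The internal record carries extra detail the wire format never sees.
internal = {"id": "A42", "total_pence": 1999, "warehouse_shelf": "B3"}
roundtrip = from_wire(to_wire(internal))  # {"id": "A42", "total_pence": 1999}
```

The version field is the important design choice: when the business requirements do change, you can introduce version 2 alongside version 1 instead of coordinating a simultaneous change across every system that touches the message.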

Life and Work

I currently have just under a month left before I leave Synety. I haven’t yet got anything concrete lined up, so I am looking at options…

Do I go and work for another startup? Do I start my own startup? Do I go back and do consulting? Do I go corporate again? Do I look at something else?

It is a hard decision and one I am kinda agonising over at present. :(

Mopsa – RIP

Last Friday I was getting Mopsa, one of Sam’s owls, in from the weathering when we were both startled by one of the ferrets jumping at the bars.

I loosened my grip in that split second as Mopsa baited (tried to take flight) and she was free.

She flew up to the top of the soil stack and refused to come down even when offered food; one of the problems of keeping them above flight weight.

She obviously wasn’t that comfortable there as every time she spotted one of the dogs next door she scrunched up and pretended to be a stick.

After about an hour of trying to coax her down she had had enough and flew off.

This was especially worrisome as she still had her jesses and leash attached, making her more prone to entanglement.

I put the word out on the internet and made sure local police were aware but heard nothing until this morning.

I received a call from a local man who said he had found her, but alas not alive.

Up until this moment I had held some small hope that she might come back to us alive.

I slowly walked up the road, box in hand, to collect her, tears in my eyes, trying not to break down crying.

When I spoke to him he said that he had also seen a report of another owl missing in Diseworth (a village or two over) and could I be sure it was our owl?

Hope? Could she still be out there?

This was swiftly dashed.

She had managed to remove both her leash and one of her jesses, but this hadn’t saved her. She had died anyway and been found in a pond.

I am now sitting here with tears streaming down my face at the crushing realisation that it was my fault she ended this way; that she died through my stupidity.

RIP Mopsa, you will be missed.

Mopsa

As I walk about the house I can still hear her faint ter-wit echoing around.


.UK Registry notice

I just received this notice, it doesn’t affect me directly, but is certainly interesting in that it may mean a lot of premium .uk domains being up for grabs soon…

Nominet, the .UK registry, has introduced a new Data Quality Policy. This policy requires that both the registrant name and address be verified against a third-party data source. For each domain registration or update, Nominet will try to validate the registrant name and address using their own data sources. If Nominet is not able to complete this validation, they will ask the registrar to have the data verified. Domains that do not complete the verification within 30 days will be suspended and can no longer be renewed or transferred.

Nominet will require registrars to enforce this policy starting September 22, 2014.

Thoughts on incubators

I recently saw a video that essentially said that we don’t need incubators anymore; that they don’t really give people what they need; that people can work from their kitchen because they have broadband at home.

I’ve worked for a number of startups, most in places where incubators didn’t have a good foothold, but a couple have gone that route.

I don’t think it is necessarily true that incubators are no longer needed. While anyone can get broadband pretty much anywhere, broadband is not the only thing an incubator can provide.

An incubator should provide support, access to potential investors and access to expertise, among other things. The absence of any of these at the right time will reduce the chances of a successful business growing out of your startup. A good incubator can increase the chance of success; how much depends on the incubator.

Growing without an ecosystem around you is certainly possible, but you will need to seek out these resources yourself when you require them.

So is it worth it?

As with all things: it depends. Are you going to benefit from the tech-heavy expertise that you will find in an incubator? Does the cost difference between space there and cheaper space elsewhere justify the price? From what I have seen, often yes. Is it for everybody? Nope. If you rely on cheaper workers, then an incubator-heavy area (just like a tech-heavy area) will often push costs up quite markedly: greater demand, but not a massive amount of extra supply.


The fallacy of estimation (or why agile needs #noestimates )

Over the past month I have been talking to a few friends about their agile development projects. They are using different languages, with different size teams and most of them are tracking their projects using estimates of one form or another.

Whether they use points or attempt to guess at the time required for a task, they seem to be wrong a lot of the time.

It is rare that we have full specs or truly understand the problem at the start of a project (or even a sprint). We don’t have enough information to make accurate guesses and if we spend the time to gather this information, then we are often taking away from more productive work.

We can try to average the guesses with calculations of velocity or apply fudge factors to the guessed times, but this doesn’t solve the issue. We are inherently bad at estimation.

Jeff Atwood said in the comments of http://www.codinghorror.com/blog/2006/07/how-good-an-estimator-are-you-part-ii.html

You’re saying that software estimation is impossible. I don’t think that’s true. It’s a very hard problem, but it’s not unsolvable. I think the main problem is most organizations don’t gather enough data from their past and existing projects (bugs, bug fix rate, function points, etcetera), so they’re starting from a blank estimation slate every time they launch a new project.

I personally don’t think that more data will help; I think it will just make the problem harder to solve. The real solution is to do away with the estimates and just look at the work you are actually doing for feedback.

I think the only place where having no estimates falls down is when talking to clients. Clients like certainty on budgets, and I don’t think we can change that. The only thing we can do is take a stab in the dark based on breaking the feature(s) down into bite-sized tasks, the performance and skillset of your team, the type of work, and your track record over the last few sprints. I just don’t think points and velocity add anything to the process.


10 Predictions for 2014

1. Bitcoin and other virtual currencies will become more widespread

There is a lot of momentum behind virtual currencies at present and while there is a lot of hype, there is some real progress being made.

I don’t think that it is any risk to normal bank and credit transactions any time soon, but the early adopters are certainly jumping onboard.

2. Bitcoin will come under attack from banks and other legacy parties

Banks and governments like the control they have over the money markets at present. Anything that threatens that is going to be highly resisted.

Maybe even to the extent of crashing the price of bitcoin a few times to dent confidence in the new virtual currency.

3. Overlay networks such as Bittorrent, Tor and I2P will become more popular

With concerns over both the lack of privacy and the centralised control that exists on the internet at present, BitTorrent, with its Distributed Hash Tables (DHTs), will be leveraged for more protocols. We have already seen BT-Sync and a chat application using the DHTs for their own purposes; I believe we will see more appear in the next 12 months.

We will also see a growth in both I2P and Tor in the next few months – perhaps in conjunction with virtual currencies.

4. Mainstream Social Networks will become less popular due to further concerns about privacy and government snooping

Facebook has been bitten a few times in recent months with regards to privacy concerns. People don’t like being the product sold to advertisers. This, along with concerns over government snooping, will see people considering other, distributed social networks. Diaspora hasn’t gained much traction after its initial hype, but the idea is sound, and we have seen some success with XMPP and its federation capabilities over the past few years.

5. Big data will increasingly be used in business and government

We are seeing a lot of growth in big data at present. With companies storing all their data, there is big pressure to use it to do something rather than just letting it sit there eating up storage.

6. The cloud will increasingly be used by individuals

We have seen a lot of growth in cloud storage and computing among businesses, but individual uptake has been low except where bundled with a specific application or device (e.g. iCloud). We will see more people using their own personal cloud storage as the year progresses – probably both proprietary options such as Dropbox and more generic storage options such as Rackspace Cloud Files, Amazon S3 or DreamHost DreamObjects.

7. The maker movement will continue to grow

We are seeing a lot of growth in hackerspaces, maker faires and maker-friendly devices such as the Raspberry Pi and Arduino. I don’t think this will slow down – in fact I think it will begin to really hit the mainstream.

8. DevOps will go mainstream

Much like agile is now a recognised development methodology in software engineering, DevOps, its anti-silo operations counterpart (we can argue about definitions later), will start to gain traction as more companies see it as a way to gain a competitive edge.

9. DRM will make a small comeback

Like the horror movie monsters that always come back for one last scare, DRM will make a resurgence. With the HTML5 spec allowing DRM protection, many publishers will attempt to use it in the vague hope that it will make their data secure.

This time around, with the wide variety of devices on the market, DRM will be harder to push on people. Not everyone uses Windows to consume content now, many use one of the numerous tablets around and compatibility of apps and plugins is sketchy at times.

So while DRM will continue to be used, users will push back as it breaks web applications and sites.

10. I will continue to have this nagging feeling that I should be doing something more with my life

The past couple of years I have been attempting to improve myself and my position in life. I am succeeding by some metrics, but I still have this nagging feeling I should be doing something else, something greater than myself – I’ll probably still have it this time next year.