Mopsa – RIP

Last Friday I was getting Mopsa, one of Sam’s owls, in from the weathering when we were both startled by one of the ferrets jumping at the bars.

I loosened my grip in that split second as Mopsa bated (tried to take flight) and she was free.

She flew up to the top of the soil stack and refused to come down, even when offered food; one of the problems of keeping them above flight weight.

She obviously wasn’t that comfortable there as every time she spotted one of the dogs next door she scrunched up and pretended to be a stick.

After about an hour of trying to coax her down she had had enough and flew off.

This was especially worrisome as she still had her jesses and leash attached, making her more prone to entanglement.

I put the word out on the internet and made sure local police were aware but heard nothing until this morning.

I received a call from a local man who said he had found her, but alas not alive.

Up until this moment I had held some small hope that she might come back to us alive.

I slowly walked up the road, box in hand, to collect her, tears in my eyes, trying not to break down crying.

When I spoke to him he said that he had also seen a report of another owl missing in Diseworth (a village or two over) and could I be sure it was our owl?

Hope? Could she still be out there?

This was swiftly dashed.

She had managed to remove both her leash and one of her jesses, but this hadn’t saved her. She had died anyway and been found in a pond.

I am now sitting here with tears streaming down my face at the crushing realisation that it was my fault she ended up this way; that she died through my stupidity.

RIP Mopsa, you will be missed.

Mopsa

As I walk about the house I can still hear her faint ter-wit echoing around.


.UK Registry notice

I just received this notice. It doesn’t affect me directly, but it is certainly interesting in that it may mean a lot of premium .uk domains being up for grabs soon…

Nominet, the .UK registry, has introduced a new Data Quality Policy. This policy requires that both the registrant name and address be verified against a third-party data source. For each domain registration or update, Nominet will try to validate the registrant name and address using their own data sources. If Nominet is not able to complete this validation, they will ask the registrar to have the data verified. Domains that do not complete the verification within 30 days will be suspended and can no longer be renewed or transferred.

Nominet will require registrars to enforce this policy starting September 22, 2014.

Thoughts on incubators

I recently saw a video that essentially said that we don’t need incubators anymore; that they don’t really give people what they need; that people can work from their kitchen because they have broadband at home.

I’ve worked for a number of startups, most in places where incubators didn’t have a good foothold, but a couple have gone that route.

I don’t think it is true that incubators are no longer needed. While anyone can get broadband pretty much anywhere, connectivity is not the only thing that an incubator can provide.

An incubator should be providing support, access to potential investors and access to expertise, among other things. The absence of any of these at the right time will reduce the chances of a successful business growing out of your startup. They can increase the chance of success; how much depends on the incubator.

Growing without an ecosystem around you is certainly possible, but you are going to need to seek out these resources yourself when you require them.

So is it worth it?

As with all things: it depends. Are you going to benefit from the tech-heavy expertise that you will find in an incubator? Does the cost difference between space there and cheaper space elsewhere justify the price? From what I have seen, often yes. Is it for everybody? Nope. If you rely on cheaper workers then an incubator-heavy area (just like a tech-heavy area) will often push costs up quite markedly: greater demand, but not a massive amount of extra supply.

 

The fallacy of estimation (or why agile needs #noestimates )

Over the past month I have been talking to a few friends about their agile development projects. They are using different languages, with different size teams and most of them are tracking their projects using estimates of one form or another.

Whether they use points or attempt to guess at the time required for a task, they seem to be wrong a lot of the time.

It is rare that we have full specs or truly understand the problem at the start of a project (or even a sprint). We don’t have enough information to make accurate guesses and if we spend the time to gather this information, then we are often taking away from more productive work.

We can try to average the guesses with calculations of velocity or apply fudge factors to the guessed times, but this doesn’t solve the issue. We are inherently bad at estimation.

Jeff Atwood said in the comments of http://www.codinghorror.com/blog/2006/07/how-good-an-estimator-are-you-part-ii.html

You’re saying that software estimation is impossible. I don’t think that’s true. It’s a very hard problem, but it’s not unsolvable. I think the main problem is most organizations don’t gather enough data from their past and existing projects (bugs, bug fix rate, function points, etcetera), so they’re starting from a blank estimation slate every time they launch a new project.

I personally don’t think that more data will help. I think it will just make the problem harder to solve. The real solution is to do away with the estimates and just look at the work you are actually doing for feedback.

I think the only place where having no estimates falls down is when talking to clients. Clients like certainty on budgets and I don’t think we can change that. I think the only thing we can do is take a stab in the dark based on breaking down the feature(s) into bite-sized tasks, the performance and skillset of your team, the type of work and your track record over the last few sprints. I just don’t think points and velocity add anything to the process.
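To make "looking at the work you are actually doing" concrete, here is a minimal sketch (my own illustration, not taken from any of the projects mentioned): resample the team’s actual weekly throughput to forecast how long a backlog will take, rather than summing per-task guesses. The throughput history and backlog size are made up.

```python
import random

def forecast_completion(weekly_throughput, backlog_size, simulations=10_000, seed=42):
    """Monte Carlo forecast: resample historical weekly throughput to
    estimate how many weeks a backlog of tasks will take."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(simulations):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= rng.choice(weekly_throughput)  # replay a past week
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    # Report the median and a more cautious 85th-percentile outcome.
    return outcomes[simulations // 2], outcomes[int(simulations * 0.85)]

# Hypothetical history: tasks actually finished in each of the last 6 weeks.
p50, p85 = forecast_completion([3, 5, 2, 6, 4, 4], backlog_size=40)
print(p50, p85)
```

The forecast comes straight from delivered work, so it self-corrects as the history grows – no points, no velocity calculations.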

 

10 Predictions for 2014

1. Bitcoin and other virtual currencies will become more widespread

There is a lot of momentum behind virtual currencies at present and while there is a lot of hype, there is some real progress being made.

I don’t think it poses any risk to normal bank and credit transactions any time soon, but the early adopters are certainly jumping onboard.

2. Bitcoin will come under attack from banks and other legacy parties

Banks and governments like the control they have over the money markets at present. Anything that threatens that is going to be highly resisted.

Maybe even to the extent of crashing the price of bitcoin a few times to dent confidence in the new virtual currency.

3. Overlay networks such as Bittorrent, Tor and I2P will become more popular

With concerns over both the lack of privacy and the centralised control that exists on the internet at present, BitTorrent with its Distributed Hash Tables (DHT) will be leveraged for more protocols. We have already seen BT-Sync and a chat application using the DHT for their own purposes; I believe we will see more appear in the next 12 months.

We will also see a growth in both I2P and Tor in the next few months – perhaps in conjunction with virtual currencies.

4. Mainstream Social Networks will become less popular due to further concerns about privacy and government snooping

Facebook has been bitten a few times in recent months with regards to privacy concerns. People don’t like being the product sold to advertisers. This, along with concerns over government snooping, will see people considering other distributed social networks. Diaspora hasn’t gained a decent amount of traction after its initial hype, but the idea is sound and we have seen some success with XMPP and its federation capabilities over the past few years.

5. Big data will increasingly be used in business and government

We are seeing a lot of growth in big data at present. With companies storing all their data, there is big pressure to use it for something rather than have it just sit there eating up storage.

6. The cloud will increasingly be used by individuals

We have seen a lot of growth in cloud storage and computing with businesses, but individual uptake has been low except where bundled with a specific application or device (i.e. iCloud). We will see more people using their own personal cloud storage as the year progresses – probably both proprietary options such as Dropbox and more generic storage options such as Rackspace Cloudfiles, Amazon S3 or Dreamhost Dreamobjects.

7. The maker movement will continue to grow

We are seeing a lot of growth in hackerspaces, maker faires and maker-friendly devices such as the Raspberry Pi and Arduino. I don’t think this will slow down – in fact I think it will begin to really hit the mainstream.

8. DevOps will go mainstream

Much like agile is a recognised development methodology in software engineering, DevOps, its anti-silo operations counterpart (we can argue about definitions later), will start to gain traction as more companies see it as a way to gain a competitive edge.

9. DRM will make a small comeback

Like horror movie monsters that always need to come back for one last scare, DRM will make a resurgence. With the HTML5 spec allowing DRM protection, many publishers will attempt to use it in the vague hope that it will make their data secure.

This time around, with the wide variety of devices on the market, DRM will be harder to push on people. Not everyone uses Windows to consume content now, many use one of the numerous tablets around and compatibility of apps and plugins is sketchy at times.

So while DRM will continue to be used, users will push back as it breaks web applications and sites.

10. I will continue to have this nagging feeling that I should be doing something more with my life

The past couple of years I have been attempting to improve myself and my position in life. I am succeeding by some metrics, but I still have this nagging feeling I should be doing something else, something greater than myself – I’ll probably still have it this time next year.

How much is too much?

I’ve worked for a number of start-ups and so far have seen about half of them fail. I don’t see this as a personal judgement, more about it being the nature of start-ups.

The failures generally aren’t about the product – although some did suck big-time. It is more about cultural failings.

It is common for people in IT to work long hours. We generally enjoy what we do and try to deliver what the business needs. We aren’t generally rewarded for this, or at least not well, but working harder isn’t the answer to making a business work.

I’m not saying there won’t be crunch times when you need to get stuff done by a deadline, or when a customer-affecting outage needs to be dealt with even if you have already put in a day of work. These are exceptions, or at least should be. You can’t ask people to give their all and not expect them to burn out.

I don’t hide the fact that about a year ago I burnt out. I had (along with a friend I brought with me) managed to turn around the infrastructure of the company I was working for in less than 6 months. This is hard to do, especially with minimal downtime, and the stress was hitting me hard – and not just because of the long hours. It is mentally draining to be responsible for so much change in so short a time.

It wasn’t just the stress from work that was the problem. There were a lot of other things happening in my life at the same time, each taking a little of my time and attention and adding a tiny amount of additional stress to the mix.

When you burn out you need to deal with these stresses. You need to remove the causes as well as handle them far better – especially by taking time to relax.

One of the most stressful things I found was commuting on the train every day. It is only an 18-minute trip on the train, but to even get to the station I need to cover the 2 miles there from home… and once on the train, it was cramped, with small aisles, many people trying to get their bags into and out of the racks, and pushing to find a seat. All in all, not a relaxing environment.

Once I changed to the bus I found I wasn’t arriving at work stressed from the journey. Yes, the commute is longer – about as long again – but it doesn’t stress me out like the train did.

Many of the other stresses are still there in one form or another and may continue to be so for a while yet, but I feel I have got a better handle on life.

Even now a year later, I still have the scars to show from burning out, but I still enjoy what I do and want to continue to work in the start-up realm.

Post DevOpsDays ignite talk post-mortem

It has been around one and a half months since I gave my talk on burnout at DevOpsDays – London and it has been an odd time for me.

The initial reaction after the conference was awesome. People told me that what I did took guts, that my talk was good and many of them had gone through the same thing themselves. It certainly boosts the ego a little. :)

After a week or so the momentum died down. I was still speaking to a couple of people about related topics, and still have a side development project ongoing about monitoring burnout, but it seemed that almost everyone that wanted to speak to me to congratulate me or to talk about projects and other things they were working on had already done so.

Then about a week or so ago something strange started happening; people who had seen my talk or the video afterwards started to pluck up the courage to speak to me about their current problems with burnout. This I hadn’t foreseen.

While I have been through burnout, I am not an expert on the subject, but I will listen to people. I guess people are seeing me as someone sympathetic to the problem and perhaps might have some insight and impartial advice.

It is a hard position for me, especially if I know them personally. How do you break it to someone that they have a number of options, but their best might be to leave their job and all the security that comes with it? Or, worse still, that if they don’t deal with the issue they are probably going to burn out, spiral downward and end up sacked anyway?

I suppose at the end of the day, all I can really do is be supportive and spell out the options, avoiding advising on any particular path. I just wish I could do more to help people experiencing this issue. :(

 

Banks 2.0

Just a brain dump after a conversation with a friend:

Friend: How widespread do you think this will become?
Friend: i.e., in our 60’s, do you think we’ll be able to buy groceries with BTC?
Friend: or something similar
Mike : bitcoin will die in a year or two… something else will replace it…
Friend: you think!?
Mike : the problem with BTC as it stands is that it is being hyped too much…
Mike : it is a bubble…
Mike : but you can’t tell when it will burst
Friend: i’ll agree
Mike : it might be tomorrow… it might be a year from now…
Mike : but it is paving the way for what will come afterwards…
Friend: just getting really tired of USD
Mike : and it will usher in a new world where many virtual currencies all co-exist…
Mike : where the actual currency it is stored in is transparent to you, since your bank balances them… buying and selling to try to make the best return for you…
Friend: i doubt that..the bank would never give you anything good…but i’m cynical about banking
Mike : you will probably see a single value… a USD equiv most likely… that you own…
Mike : which will fluctuate day to day as exchange rates change and banks buy and sell your currencies in the background
Mike : these aren’t traditional banks… these are banks 2.0

Will there be risk? Yes, we are talking about systems where people can game them, where hacking will be a serious risk, where anonymity is valued.

We are also talking about a system that the traditional banks will want to avoid; that they will pressure governments to oppose and regulate.

I believe that there will be at least two categories of these banks.

  • Banks in the real world, with regulations and government cooperation – your money will be mostly protected, but also tied to your identity. You can’t hide from tax here…
  • Virtual Banks: completely online. Nothing ties them to the real world directly. No hard currency is present. They simply balance the virtual currencies received and sent with the virtual currencies elsewhere. They balance risk around the virtual banking network based on their own risk tolerances and their own policies. Their reputation is built over time and you can choose not to do business with them – but we do need to build a workable reputation system, or support escrow payments between currencies (multiparty transactions in bitcoin are a good starting point).

By making these virtual banks small – probably just a few users – the risk itself is reduced. This is nothing like a real-world bank crash (think Iceland or the recent bailouts), but there is risk. This risk needs to be minimised, but there is nothing stopping you spreading your cash around; your wallet software should help here.

We need to get this conversation started. What am I missing? Anyone interested in working towards this long term? Thoughts?

 


Pricing for use-cases

It is very common to see mismatches in pricing for significant demographics in your audience.

Sometimes these will be down to technical implementation issues, but others seem to be a lack of understanding of who your customers are.

I’m not singling them out – I do believe it is an accounting/technical issue – but on my daily commute I pay for a 28-day ticket, like the majority of other people that commute.

Also like the majority of commuters, I get paid monthly, and since the average month is 30 days and a fraction (365/12 ≈ 30.4 days), the two cycles drift out of sync over the course of the year.

I don’t believe there is any malice involved; I believe it is simply an issue of wanting a fixed number of days for a ticket to simplify accounting and adding it to the machines, but it does cause inconvenience when two billing cycles fall in the same month.
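The mismatch is easy to put numbers on (an illustrative back-of-the-envelope of my own, not anything from the ticket machines):

```python
# A 28-day ticket against a monthly pay cycle: over a year you buy
# roughly one more ticket than you receive pay cheques, so twice a
# year two renewals land in the same pay period.
days_per_year = 365
ticket_length_days = 28
pay_cycles_per_year = 12

tickets_per_year = days_per_year / ticket_length_days
print(tickets_per_year)                          # ≈ 13.04 tickets
print(tickets_per_year - pay_cycles_per_year)    # ≈ 1 extra renewal a year
```

A 30/31-day or calendar-month ticket would keep renewals aligned with pay, at the cost of variable-length ticket periods in the accounting.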

Most of the time this is down to a misunderstanding of who your customers are. You want to make giving you money as painless as possible; if you don’t, competitors will.

Scaling Feature-switches

I was recently helping out a friend’s company to scale out. This details some of the issues that were faced when scaling out the code for the feature switches.

Once your applications get big, being able to toggle feature-switches on and off to enable and disable features or change things in real time is a great advantage.

If you are getting 10 page requests per second across 2 servers and you have 10 feature-blocks on the page, then 200 lookups per second can easily be accomplished via Redis or Memcache directly. The per-page penalty shouldn’t be massive if your Redis server isn’t too far removed from your application servers – typically 10ms, assuming my benchmarks are within the right ballpark (60,000 GETs per second and around 1ms round-trip time).

This becomes a bigger issue when you are looking at many more servers, feature blocks and many more pages per second. The Redis server becomes a bottleneck and the round-trip time means that you need many more threads to do the work.

To this end, I redesigned the standard ‘gatekeeper’ code that was in use so that it would scale more effectively.

They used to use APC for op-code caching and now use Xcache. Both support storing variables in memory for blazingly fast access. This gives great speed, but it is a pain to set these variables from outside of the web server – i.e. from the CLI.

So, what we ended up doing was using these in-memory variables to store the feature flags for a short time after grabbing them from the key/value store (Redis in this case). With a 10-second cache time and 100 requests per second, this dropped the number of requests against Redis by three orders of magnitude.
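The original code was PHP using APC/Xcache variable storage in front of Redis; here is a rough Python sketch of the same read-through pattern (the names are mine, and a plain dict stands in for Redis so the example is self-contained):

```python
import time

class CachedFlags:
    """Local in-process cache in front of the flag store (Redis in the
    original setup; here any object with a get(key) method will do)."""
    def __init__(self, backend, ttl=10.0):
        self.backend = backend
        self.ttl = ttl
        self._cache = {}                # key -> (value, fetched_at)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        hit = self._cache.get(key)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]               # fresh enough: no backend round-trip
        value = self.backend.get(key)   # miss or stale: hit the store
        self._cache[key] = (value, now)
        return value

store = {"new_checkout": True}          # stand-in for Redis
flags = CachedFlags(store, ttl=10.0)
print(flags.get("new_checkout"))        # fetched from the store
store["new_checkout"] = False
print(flags.get("new_checkout"))        # still the cached value until the TTL expires
```

The trade-off is the same as in the original: a flag change can take up to one TTL to reach every process, in exchange for almost all reads being served locally.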

A second issue was that as the application was scaled out across multiple data centres, we had multiple Redis servers for the feature flags – one for each cluster. Distributing changes to the flags was a bit of a pain. We settled on using a RabbitMQ fanout exchange and a worker sitting on each Redis server to listen for global updates and then push the changes into Redis. This was the most fragile part of the setup, but it worked well for us.

RabbitMQ supports server-side keep-alives, but our worker code was a pain to change to accommodate client-side keep-alives. This meant that a network partition would see RabbitMQ close the connection without the client being aware of it.

A second worker was used to check the first worker (and other processes) for updates and restart it if it hadn’t responded for a while, either to a heartbeat or a real message.
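The watchdog logic itself is simple enough to sketch (again in Python with illustrative names; the real second worker also restarted the stalled process, which is elided here):

```python
import time

class Watchdog:
    """Tracks heartbeats from workers and flags any that have gone quiet.
    The timeout and worker names are made up for illustration."""
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self._last_seen = {}

    def beat(self, worker, now=None):
        # Called whenever a worker answers a heartbeat or handles a message.
        self._last_seen[worker] = time.monotonic() if now is None else now

    def stale_workers(self, now=None):
        # Workers that have been silent past the timeout and need a restart.
        now = time.monotonic() if now is None else now
        return [w for w, t in self._last_seen.items() if now - t > self.timeout]

dog = Watchdog(timeout=30.0)
dog.beat("flag-consumer", now=0.0)
dog.beat("metrics-worker", now=20.0)
print(dog.stale_workers(now=40.0))   # → ['flag-consumer']
```

Counting a real message as a heartbeat (as the post describes) matters: a busy consumer may skip explicit heartbeats while it is doing useful work, and you don’t want to restart it for that.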

This gave a very scalable solution without a massive amount of overhead.

I hope to have a reimplementation of the architecture very soon and will make it available on GitHub when it is working.