Interesting opinion piece from TechCrunch on IT

Started by CountDeMoney, January 31, 2015, 08:55:04 PM


CountDeMoney

http://techcrunch.com/2015/01/31/three-reasons-why-your-software-is-so-far-behind-schedule/

Quote
Three Reasons Why Your Software Is So Far Behind Schedule
Posted 11 hours ago by Jon Evans (@rezendi), Columnist


When not opining here on TechCrunch I'm a software engineer for the fine folks at HappyFunCorp (1) and I'm occasionally called on to diagnose and fix projects that have gone horribly wrong (2). The more I do this, the more I notice commonalities among problem projects: "antipatterns," if you will. Here are three more from my ongoing list. Names have been changed to protect the guilty.

1. Onboarding Time == Technical Debt

Technical debt is not always a bad thing, but if you accrue too much of it, it will kill you. When under schedule pressure, or when new devs keep coming onto and going off a project, people tend to build new software subsystems and connect them to the old ones Rube-Goldberg style, instead of doing it right. It's like turning a family home into a cantilevered skyscraper one room at a time, and waiting with dread for an earthquake to hit, instead of razing it and pouring a new foundation as you should have.

But sometimes taking on short-term technical debt is the right thing to do. The real problem with technical debt is that it often lurks off the metaphorical balance sheet: it's hard to measure, especially if you're non-technical. What I've noticed of late is that there exists an excellent proxy measure for a project's technical debt: the onboarding time for a new developer.

So ask yourself: how long does it take a new dev to get set up and start pushing good new code to the repo? In many cases the answer should be an hour or less. No, really. "git pull", "bundle install", "rails s" and away you go. Or "git pull", "pod update", open the workspace in Xcode, hit Command-R, and boom. It's been some months since I did any Android work, but Android Studio ought to be comparably easy.
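To make that concrete, here's a minimal sketch of what a one-command setup might look like for a Rails project, following the common bin/setup convention. The file name, the sample-config step, and the individual commands are illustrative assumptions; every project's list will differ, but the point is that the whole list lives in one script a new dev can run right after cloning:

    #!/usr/bin/env ruby
    # bin/setup -- hypothetical one-command onboarding script for a Rails app.
    # A new dev runs this once after cloning and should be ready to work.
    require "fileutils"

    APP_ROOT = File.expand_path("..", __dir__)

    def system!(*args)
      # Fail loudly if any step breaks, so new devs aren't left guessing.
      system(*args) || abort("\n== Command #{args} failed ==")
    end

    FileUtils.chdir APP_ROOT do
      puts "== Installing dependencies =="
      system! "bundle install"

      puts "== Copying sample config =="
      # The .sample file name is an assumption; use whatever your project ships.
      unless File.exist?("config/database.yml")
        FileUtils.cp "config/database.yml.sample", "config/database.yml"
      end

      puts "== Preparing database =="
      system! "bin/rake db:setup"

      puts "== Done. Start the app with: rails s =="
    end

If getting through a script like this takes a new hire an afternoon of manual tweaking, that afternoon is your technical-debt meter ticking.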

But, I hear you sputtering, my environment is very complicated! We have virtual machines and multiple databases and virtual private networks! Systems talking to systems talking to systems! No way we can get a new dev set up in an hour! Uh-huh. Facebook's pretty large and complicated too, you know ... and Facebook engineers famously write real code on their first day and push it to the live site their first week. If your new developers are spending hours wrestling with configuration and environments, you have probably run up more technical debt than you care to think about.

2. The Test Suite Sweet Spot

Obviously you need to write, and run, tests for your code. And in an ideal world, you would have up-to-date tests for all of your code, which run automatically whenever you commit a change to the repo. Unfortunately, in testing as in life, the perfect is often the enemy of the good. It's amazing how often I've encountered projects with elaborate test suites that have been hopelessly broken for months and/or take forever to run.

Developers write tests but don't run them regularly, so the tests begin to fall out of date, and schedule pressure means that fixing them is non-critical whereas getting the new release out is critical, so the vicious circle compounds and the test suite decays into uselessness. Or, more rarely, test-driven development becomes test-centered development, and actually Getting Stuff Done takes a back seat to writing ever more elaborate test code, the refactoring of which takes so much time that development progress gets slower and slower.

There are costs to maintaining a huge and complex test suite; after you refactor code, you may have to either refactor your tests, which takes time, or let them break, which (ultimately) takes even more time. If your organization / management / development pressures are such that keeping all your tests up to date isn't a valid option, and you can't alleviate those pressures, then it's actually better to shrink your test suite. I know that doesn't sound appealing. I don't like it either. But a smaller test suite you actually run and maintain is much better than a larger one you ignore.
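For what it's worth, the kind of test that survives this triage is the small, fast one. Here's a minimal sketch using Minitest, which ships with Ruby; the Cart class and its API are made up purely for illustration:

    # test/cart_test.rb -- a tiny, fast unit test of the sort worth keeping:
    # no network, no database, runs in milliseconds, so it actually gets run.
    require "minitest/autorun"

    # Hypothetical class under test.
    class Cart
      def initialize
        @items = []
      end

      def add(name, price)
        @items << [name, price]
      end

      def total
        @items.map { |_, price| price }.reduce(0, :+)
      end
    end

    class CartTest < Minitest::Test
      def test_total_sums_item_prices
        cart = Cart.new
        cart.add("book", 10)
        cart.add("pen", 2)
        assert_equal 12, cart.total
      end

      def test_empty_cart_totals_zero
        assert_equal 0, Cart.new.total
      end
    end

A hundred of these that run on every commit beat a thousand slow integration tests nobody has run since March.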

3. Please Just Stop Running Your Own Servers Already

Big Company problem projects tend to have something else in common: they're still running their own servers. No AWS or Google Compute Engine for them, much less Heroku or Elastic Beanstalk or App Engine. They have their own machines. And their own sysadmins. And their own homegrown processes for patching and updating and deploying code. And their own incredibly paranoid security, which often means "no developer may lay hands on the production machines," which often makes it pretty hard to get anything done.

Big companies need to ask themselves: are our sysadmins better than Google's or Amazon's? And since the answer is probably no, the conclusion is: why don't we let the experts handle it? (To be fair, they often are better than those of some cloud hosts; e.g., I've never had a good experience with Rackspace.) Yes, there are downsides. Yes, it means a loss of control. It's not a panacea.

But unless your circumstances are truly exceptional, a full cost/benefit analysis usually points very firmly towards moving your servers to The Cloud. Back in the early days of electricity, every factory had its own generator, and many protested loudly at the loss of control inherent in this whole newfangled "power grid" notion. But there's a reason the grid succeeded, and it's the same reason the cloud is succeeding today. Don't get stuck in the twentieth century. Stop making the old mistakes. After all, there are so many bold new mistakes to make.
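To illustrate how far the other direction has come: on a cloud provider, "provision a server" is an API call rather than a purchase order, a rack visit, and a sysadmin ticket. A rough sketch using the aws-sdk-ec2 gem; the region, the AMI id, and the credentials setup are all placeholder assumptions:

    # Spins up a single small EC2 instance. Assumes AWS credentials are
    # already configured in the environment; the AMI id is a placeholder.
    require "aws-sdk-ec2"

    ec2 = Aws::EC2::Resource.new(region: "us-east-1")

    instance = ec2.create_instances(
      image_id:      "ami-0123456789abcdef0", # placeholder
      instance_type: "t2.micro",
      min_count:     1,
      max_count:     1
    ).first

    # The waiter polls until the instance is actually running.
    instance = instance.wait_until_running
    puts "Server up: #{instance.id} at #{instance.public_ip_address}"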

----------------------------------
(1) Recently profiled on video here, to my great surprise. (Nobody will believe this, but despite my TC-columnist gig, I had no idea this was happening until the week it happened.)

(2) Usually either internal projects at large clients, or startup apps built by a cheap third-party dev shop. There is some truth to the notion that you get what you pay for.

Monoriu

We just terminated a large software development project.  The contractor simply failed to deliver.  The reason is that they miscalculated the man-hours required for development, so their contract bid price was too low.  They won the contract, only to realise that doing it meant losing tons of money.

Iormlund

The same happened at the project I've recently been made responsible for. The software contract was awarded to a company that bid a third (!!) of what the rest did. Now they are effectively bankrupt. And so are those who hired them.

Zanza

I work with enterprise software, and the sheer amount of money that has already been invested in building the complexity we currently have makes the first point incredibly hard to pull off. Rebuilding a system that has been developed for 20+ years and integrated with dozens or hundreds of other systems, without breaking operations, is hard. The second point is also hard, as there is usually not just one point where you need to test; the real problems come up when you test processes across multiple systems. The third point is not such a biggie as we have already outsourced data center operations to professional IT service providers.

CountDeMoney

Quote from: Zanza on February 01, 2015, 10:45:48 AM
The third point is not such a biggie as we have already outsourced data center operations to professional IT service providers.

Interesting, as I thought the third point would be the most difficult of all three, but I guess it also depends on the industry.  I also never see it happening in most organizations, simply over the issue of control.  In a shared services environment, I just don't see IT ever relinquishing control over anything like server farms or DC operations, ever. 

Zanza

There is no general prohibition in my company against moving stuff into the cloud. However, buying whole software solutions often fails due to integration issues. There are also massive concerns about moving stuff into clouds that are physically located in the US, due to weak data protection laws there.

Siege

This guy works for one of the big ones, right?
Corporate attack against small business and free market solutions!!!


"All men are created equal, then some become infantry."

"Those who beat their swords into plowshares will plow for those who don't."

"Laissez faire et laissez passer, le monde va de lui même!"


MadImmortalMan

Quote from: CountDeMoney on February 01, 2015, 11:12:01 AM
I just don't see IT ever relinquishing control over anything like server farms or DC operations, ever.

Oh no, there are way too many benefits to that. I couldn't wait to get that stuff out of the house.
"Stability is destabilizing." --Hyman Minsky

"Complacency can be a self-denying prophecy."
"We have nothing to fear but lack of fear itself." --Larry Summers

CountDeMoney

Quote from: MadImmortalMan on February 01, 2015, 08:41:48 PM
Quote from: CountDeMoney on February 01, 2015, 11:12:01 AM
I just don't see IT ever relinquishing control over anything like server farms or DC operations, ever.

Oh no, there are way too many benefits to that. I couldn't wait to get that stuff out of the house.

I've seen an increasing theme in the last year or so in job interviews I've had, where a potential employer has asked me about my Microsoft Server experience as it relates to these PSIMs, in addition to the front end and edge stuff.  I have to say no, I don't:  in the last couple of large organizations I've worked for, the servers always belonged to IT, and Thou Shalt Not Touch IT's Servers.  I may have owned the application--hell, I may have even purchased the servers--but do not touch.  Submit a ticket.  Give us your internal chargeback number.  Have the integrator call for an appointment with IT for the patch.  I'm pretty sure that, all other things being equal with other candidates, not having any operational experience with MS Server for the physical security systems I am proficient in has locked me out of certain positions, because IT needs to be the most important people in the company.
Like you said, there are too many benefits to that.

Grey Fox

Quote from: CountDeMoney on February 01, 2015, 11:12:01 AM
Quote from: Zanza on February 01, 2015, 10:45:48 AM
The third point is not such a biggie as we have already outsourced data center operations to professional IT service providers.

Interesting, as I thought the third point would be the most difficult of all three, but I guess it also depends on the industry.  I also never see it happening in most organizations, simply over the issue of control.  In a shared services environment, I just don't see IT ever relinquishing control over anything like server farms or DC operations, ever.


My company had that issue midway through the '00s; IT was being an ass about every little thing, especially after the bubble burst. To fix it, IT was integrated into R&D. That threw a wrench into their works. Sadly, since the 2012 Teledyne takeover it has devolved back to being a nightmare :(

Do you know our website DB only updates every 30 minutes, and we have a serial-key-producing algorithm embedded in it? Want a new key? Gotta wait 30 minutes, or hope the update is happening within the minute.
Colonel Caliga is Awesome.