Posts

  • Managing Software Complexity

    When I went to university and started learning to program, I spent a lot of my free time making games from scratch; I wanted to apply some of my physics knowledge to building game engines. I started off with a space simulation game and implemented movement based on Newton's laws of motion: gravity affected the ships, and I made them able to shoot bullets. However, the engine proved more difficult to program the more features I added. I wanted collision detection, support for multiple players, and so on, and I finally ended up scrapping it altogether due to design flaws and the engine being far too complex to keep together without introducing bugs across the board. I did not give up, though; I restarted from scratch multiple times, bringing the knowledge of my mistakes to each new attempt. My point is that reducing complexity in software design is not something only the most senior software architects need to address; it is among the first problems a junior programmer has to learn to tackle as well. Continue reading...
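
    The core of that space simulation idea can be sketched in a few lines. This is not the original engine's code, just a minimal, illustrative Python version (names like `Body` and `step` are my own): each ship accelerates under Newtonian gravity (a = G·m/r²) and positions are advanced with explicit Euler integration.

    ```python
    G = 6.674e-11  # gravitational constant; units here are illustrative

    class Body:
        """A point mass with position (x, y) and velocity (vx, vy)."""
        def __init__(self, mass, x, y, vx=0.0, vy=0.0):
            self.mass, self.x, self.y = mass, x, y
            self.vx, self.vy = vx, vy

    def step(bodies, dt):
        """Advance the simulation by dt seconds using explicit Euler integration."""
        # First accumulate gravitational acceleration on every body...
        for b in bodies:
            ax = ay = 0.0
            for other in bodies:
                if other is b:
                    continue
                dx, dy = other.x - b.x, other.y - b.y
                r2 = dx * dx + dy * dy
                r = r2 ** 0.5
                a = G * other.mass / r2  # magnitude of acceleration toward `other`
                ax += a * dx / r
                ay += a * dy / r
            b.vx += ax * dt
            b.vy += ay * dt
        # ...then move every body along its updated velocity.
        for b in bodies:
            b.x += b.vx * dt
            b.y += b.vy * dt
    ```

    Even a toy like this hints at the complexity problem: once collisions, bullets, and multiple players each need their own per-frame pass over the same state, the update loop grows entangled quickly.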
  • Are Mainframes Still Relevant?

    I am in an interesting situation at work where I face two realities: on one hand I work with a customer whose entire IT infrastructure is based on an old IBM mainframe, and on the other hand I am involved in a huge project called Ciber Momentum whose purpose is to accelerate application transformation to modern languages. In other words, to help get rid of legacy mainframe applications. My team and I have discussed several times how important it is for our customer to modernize their codebase, since every integration and piece of development we deliver increases the technical debt the company accumulates. They are making the migration to a new system harder by the day, and the reality is that soon enough they will have to make the transition in order to stay relevant. Perhaps this stubbornness is what causes established companies to be surpassed by new startups, whose modern codebases and lower technical debt enable them to change more rapidly. Another problem is that traditional modernization projects are unavoidably expensive with no short-term return on investment, which makes it hard for management (who do not necessarily understand the technical reasoning) to decide to launch such a huge project. Continue reading...
  • Artificial Intelligence Today

    Artificial Intelligence has been around since the beginning of computing and was founded as an academic discipline in 1956. Ever since then, the scientific community has thought itself close to a real breakthrough every decade or so. As early as the 1940s, research in neurology found that the human brain is a network of neurons that fire in all-or-nothing pulses, and Alan Turing suggested that it might be possible to build an electronic brain. In 1951, a 24-year-old graduate student, Marvin Minsky, built the first neural net machine, known as SNARC, which is often referred to as the first artificial self-learning machine. Continue reading...
  • Skipping comments leads to higher quality code

    Some really bad code Continue reading...
  • Why everything you do should be on GitHub

    I started studying computer science in 2010 and noticed that some of the students were actively using GitHub for their version control needs. I finally created my own account in September 2011 and thought that I was horribly late to the party; I had used various SVN-based solutions for my school projects by then and finally decided that I should switch. However, I already had my workflows for SVN and TFS and never completed the transition. Then I started to work full time within the .NET ecosystem, and the company used a locally hosted TFS for version control. So I stayed put with my Visual Studio Online account and never thought more about it. Continue reading...

subscribe via RSS