The Game is Made as You Play

After several years working as a web programmer, I switched to coding video games, and the change has made me aware of some crucial differences between the two jobs.
One of the main shifts I noticed is the point in development at which coding begins. One of my former bosses in web development insisted that programming should not start until the design, both graphical and business-side, was finished. That rule, as any of my former colleagues will have suspected, was frequently broken, but at least the intention existed. In game programming there is no such intention: by the time coding starts, the game's definition is far vaguer than that of a web application.
This is because a web application must be, first and foremost, functional. A video game, in contrast, also needs to be fun. Functionality is far easier to predict than fun: a drop-down menu can work better or worse, but it just works. Adding the ability to block attacks to your game's protagonist will work from a functional standpoint, but it may slow down combat and thereby compromise the game experience.
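To make that concrete, here is a minimal sketch in Python (the names `Fighter`, `BLOCK_WINDOW` and so on are my own illustrative assumptions, not anyone's real combat system) of a block mechanic that is functionally correct the moment it runs, while the question of whether it is fun lives entirely in a couple of tuning numbers that only playtesting can settle.

```python
from dataclasses import dataclass

# Hypothetical tuning values: functionally, any numbers "work";
# whether combat still feels snappy is something only playtesting reveals.
BLOCK_WINDOW = 0.4         # seconds the block stays active
BLOCK_RECOVERY = 0.6       # seconds before the fighter can act again
BLOCK_DAMAGE_FACTOR = 0.2  # fraction of damage that gets through a block


@dataclass
class Fighter:
    health: float = 100.0
    blocking_until: float = 0.0
    recovering_until: float = 0.0

    def try_block(self, now: float) -> bool:
        """Raise the guard, unless the fighter is still locked in recovery."""
        if now < self.recovering_until:
            return False
        self.blocking_until = now + BLOCK_WINDOW
        self.recovering_until = self.blocking_until + BLOCK_RECOVERY
        return True

    def receive_hit(self, damage: float, now: float) -> None:
        """Apply full or reduced damage depending on whether the guard is up."""
        if now < self.blocking_until:
            damage *= BLOCK_DAMAGE_FACTOR
        self.health -= damage


# The feature "works": blocks reduce damage, recovery prevents spamming.
hero = Fighter()
hero.try_block(now=0.0)
hero.receive_hit(damage=30.0, now=0.2)  # blocked: only 6 damage gets through
hero.receive_hit(damage=30.0, now=1.5)  # guard is down: full 30 damage
print(hero.health)  # 64.0
```

Nothing in that code can tell you whether a 0.4-second window makes fights tense or makes them drag; that answer only arrives once someone plays it.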
As a result, there is no sense in fully designing a video game before starting to code it, since it is impossible to predict whether the game will be fun. The process needs far more iteration than any other kind of programming. Prototyping is done systematically, and even so the proportion of code written that survives into the final game is extremely low. For this reason, a game's architecture leans dangerously towards spaghetti, and the programmer lives in a state of perpetual refactoring. Game programmers need to accept that virtually any feature is subject to change, and to structure their systems so that restructuring them hurts as little as possible. The one exception is when you hear "this game would work best as an online multiplayer." If that happens, run and never look back.
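One common way of preparing for that constant restructuring, sketched below under my own assumptions rather than any particular engine's API, is to keep each gameplay feature behind a small seam such as an event bus, so that cutting or rewriting a feature means unsubscribing one handler instead of untangling it from everything else.

```python
from collections import defaultdict
from typing import Callable


class EventBus:
    """A tiny publish/subscribe seam between gameplay features."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def publish(self, event: str, **data) -> None:
        for handler in self._handlers[event]:
            handler(**data)


bus = EventBus()


# Each feature only knows about the bus, not about the other features.
def combat_on_hit(attacker, defender, damage):
    print(f"{defender} takes {damage} damage from {attacker}")


def camera_on_hit(attacker, defender, damage):
    print("camera shake")


bus.subscribe("hit", combat_on_hit)
bus.subscribe("hit", camera_on_hit)  # deleting this line removes the feature cleanly

bus.publish("hit", attacker="goblin", defender="hero", damage=12)
```

The point is not this particular pattern but the habit: when the camera shake, the block mechanic, or half the combat system gets cut next week, the code that remains should barely notice.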