Salvage

“Salvage” applies to legacy code that has developed problems which completely prevent it from operating. In these circumstances it may be tempting to write the code off as useless, or to opt for a complete rewrite. However, rewrites are often far less simple than they seem: costs and timescales can escalate, and entire projects sometimes end up abandoned.

On the other hand, careful investigation may reveal the fundamental problem to be something simple. If it ran before, it can’t be too far from running again. Ugly code and mysterious error messages may make the code appear inextricably tangled, but this is often more an issue of understanding than an insurmountable problem with the architecture. Your code may not be optimal, but it probably doesn’t need to be – it just needs to be functional.

This is where our expertise and experience in dealing with legacy systems come in. We can help get to the root of your problem much faster, saving you time and money – so you can focus on moving forward rather than being stuck trying to reinvent the wheel.

Problems due to environment change

If your system was working fine until yesterday, when it suddenly choked, then the most probable cause is some kind of environment change: processes’ requests for data suddenly return empty, or with a response in an unexpected format. This can happen in several different ways – some examples are:

  • data is being taken from a website which has recently changed
  • permissions to a third party API or service have been revoked
  • a third party API or service has changed format, or been discontinued
  • a change in third party software which is being relied upon (e.g. automatic updates, upgrades to the browser)
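
To give a flavour of how such a change typically surfaces, the sketch below (in Python, with a purely hypothetical endpoint and field names) wraps a third-party call in a defensive check, so that an empty or reshaped response produces a clear error instead of silently breaking whatever depends on it:

    import json
    import urllib.request

    # Hypothetical third-party endpoint and expected fields; illustrative only.
    FEED_URL = "https://api.example.com/v1/products"
    REQUIRED_FIELDS = {"id", "title", "price"}

    def fetch_products():
        """Fetch product data, failing loudly if the response shape has changed."""
        with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
            payload = json.loads(resp.read().decode("utf-8"))

        items = payload.get("items")
        if not items:
            # An empty response often means revoked permissions or a moved endpoint.
            raise RuntimeError("Upstream returned no items; check credentials and URL")

        for item in items:
            missing = REQUIRED_FIELDS - item.keys()
            if missing:
                # A missing field usually means the provider changed their format.
                raise RuntimeError(f"Upstream format changed, missing fields: {missing}")

        return items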

Issues arising due to environment change can often be fixed with only a small modification to the code. If you find your system has stopped working suddenly, there is a good chance we can help – get in touch with us straight away by email for a diagnosis and cost estimate.

Case Study: Amazon Affiliate Reseller Website

Developer Evaluation

[This] is a small website mostly focused on books, ebooks and souvenirs… The main purpose is custom searches of Amazon for books and other products. However, this functionality recently stopped working, with an error message showing in place of the Amazon results. Investigation has revealed this is almost certainly due to the associated affiliate account having been deactivated. It is therefore believed the problem can be fixed simply by registering a new affiliate account and changing the credentials in the several places where they are specified in the code.
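
The fix in this case simply edited each occurrence of the credentials in place, but it is worth illustrating the kind of small change that avoids having to repeat the exercise next time: gathering the credentials into a single configuration point that the rest of the code imports. The variable names below are hypothetical, shown in Python purely as a sketch:

    import os

    # Illustrative sketch: load the affiliate credentials once, from the
    # environment, so code that builds Amazon requests imports them from here
    # rather than hardcoding its own copy.
    AMAZON_ASSOCIATE_TAG = os.environ["AMAZON_ASSOCIATE_TAG"]
    AMAZON_ACCESS_KEY = os.environ["AMAZON_ACCESS_KEY"]
    AMAZON_SECRET_KEY = os.environ["AMAZON_SECRET_KEY"]

With something like this in place, another account change would mean updating three values in one location rather than hunting through the codebase.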

Steps taken & Time Breakdown

  •  Registered new Amazon Affiliate Account
  •  Created new Affiliate Access Token
  •  Changed multiple occurrences of credentials directly in code

Statistics

  • Manpower: 1 Developer + 1 Supervisor
  • Time to Completion: 1 Working Day
  • Server Downtime: None
  • Cost Per Hour: $45
  • Total Developer Hours: 2
  • Total Cost: $90

Result

Successful Fix

Accidental or Malicious Damage

Unfortunately, the weakest link in any system is often the humans involved! No matter how carefully you set up access rights or how closely you pay attention to security, people can still find ways (accidentally or otherwise) to damage your system. Such damage often involves deletion of code or data, and the fix can range from a minor tweak to something much more involved. What is needed will depend on how your platform is configured, what backups have been made and how quickly you need to get up and running again. Either way, we suggest getting in touch at the earliest opportunity.

Problems due to technical debt

“Technical debt” is an increasingly popular phrase describing the gradual decline in code quality caused by an accumulation of cut corners. Understandably, businesses are keen to release new features as early as possible, so the emphasis is often on speedy delivery rather than code quality. This can work well for a while, but if enough care is not taken over a long enough period of development, the result can be processes that become unmanageable or effectively grind to a halt. Problems of this kind are unfortunately very common in software, because management typically focuses on short-term objectives while lacking the technical knowledge to critique the code being produced.

Code that has been put completely out of action by technical debt requires two phases to correct. The first phase is to implement the minimum change required to get the system operational once again. Once this is done it may be tempting to abandon any further work and go straight back to business as usual. This is something we would advise against. In a system suffering from fundamental infrastructure issues, a quick fix is likely to just add to the pile of problems, and a short way down the road you will probably find yourself in a similar position.

The second phase begins with a more detailed analysis of which areas are likely to cause the most headaches long term. We want to avoid a complete ground-up rewrite where possible, as this usually works out far more time-consuming and expensive than expected. Addressing the most serious issues in your system may be enough to add several years to its life expectancy, without the full cost of a rewrite.

Normally this second phase can be achieved by isolating the most troublesome functionality item by item and refactoring only those areas. The time and cost required are unique to each specific case – please get in touch by email for a free initial evaluation and cost estimate.
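
To give a rough idea of what isolating functionality item by item can look like in practice, the sketch below (hypothetical names, in Python) puts a small, stable interface in front of a troublesome legacy routine. Callers are moved onto the interface one by one, and the messy logic can then be refactored or replaced behind it without another round of sweeping changes:

    # Hypothetical sketch: a thin, well-defined entry point in front of a
    # troublesome legacy routine, so it can be refactored behind the interface
    # without touching every caller at once.

    def calculate_invoice_total(order: dict) -> float:
        """Stable entry point that callers migrate to one by one."""
        # Initially this just delegates to the legacy code; later it is
        # re-pointed at the refactored implementation.
        return _legacy_invoice_total(order)

    def _legacy_invoice_total(order: dict) -> float:
        # Stand-in for the existing tangled logic.
        return sum(item["price"] * item["qty"] for item in order["items"])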

Case Study: A reverse proxy-based website whitelabeling service

Developer Evaluation

[This] is a service which “whitelabels” certain websites, by filtering the proxied response so logos and other identifying features are replaced… The current condition of the project is non-operational due to multiple issues with the filtering code, database access and user interface. The client describes having battled with these for a number of weeks, receiving complaints, before finally announcing the service suspended… Inspection of the codebase reveals several areas where the underlying architecture is problematic. The most significant appears to be the database abstraction layer, which is complex, inconsistent in structure and does not use a consistent input/output format… Factoring out the abstraction layer and replacing it with direct SQL calls to the database should resolve much of the problematic functionality… There is also the issue of some data being stored in JSON format within database fields, making extraction a complicated two-step process and searches over database entries difficult. Restructuring the database tables in tandem with the suggested SQL changes will resolve this, but may be time-consuming and will need to be tested carefully…
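
As an illustration of the kind of change proposed above, the sketch below contrasts a call into an opaque abstraction layer with a direct SQL query that has a predictable result shape. The table, column and function names are hypothetical, and Python with SQLite stands in for whatever stack the project actually uses:

    import sqlite3

    # Before: a call into the in-house abstraction layer, whose return format
    # varied from module to module.
    #   sites = db_layer.fetch("whitelabel_sites", {"active": True})

    # After: one explicit query with a predictable result shape.
    def fetch_active_sites(conn: sqlite3.Connection) -> list:
        """Return active whitelabel sites as plain dicts, one per row."""
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            "SELECT id, domain, replacement_logo FROM whitelabel_sites WHERE active = 1"
        ).fetchall()
        return [dict(row) for row in rows]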

Steps taken & Time Breakdown

  •  Cloned project to fresh dev environment
  •  Created test site to whitelabel (previously absent)
  •  Resolved 3 immediate points of failure
  •  Pushed modifications to production and restarted service, ending downtime
  •  Replaced abstraction layer calls with SQL queries module by module
  •  Cleaned up hardcoding and replaced with config directives
  •  Designed and implemented corrected database schema on isolated server
  •  Developed and tested process to port data to new schema format
  •  Ported data and rerouted app to new schema
  •  Replaced calls to the old data format module by module
  •  Added a skeleton unit test framework
  •  Ran tests
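
The data-porting step in the list above is worth a brief sketch: rows whose settings were stored as a JSON blob in a single text field are unpacked into proper columns in the corrected schema, so they can be queried directly rather than fetched and parsed in two steps. All table, column and key names here are hypothetical:

    import json
    import sqlite3

    def port_site_settings(old: sqlite3.Connection, new: sqlite3.Connection) -> None:
        """Unpack JSON-blob settings from the old schema into real columns."""
        for site_id, blob in old.execute("SELECT id, settings_json FROM sites"):
            settings = json.loads(blob)  # the old two-step extraction: fetch, then parse
            new.execute(
                "INSERT INTO site_settings (site_id, logo_url, header_text) VALUES (?, ?, ?)",
                (site_id, settings.get("logo_url"), settings.get("header_text")),
            )
        new.commit()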

Statistics

  • Manpower: 4 Developers + 1 Supervisor
  • Time to Completion: 4 Weeks
  • Server Downtime: 12 Hours
  • Cost Per Hour: $45
  • Total Developer Hours: 328
  • Total Cost: $14,760

Result

Successful Fix