
Case Study

Initial Brief

In early February, a specialized manufacturing client went live with a critical integration between their custom Inventory Management System (IMS) and MYOB. While I oversaw the project’s strategic technical choices and resource coordination, the specific implementation was handled by an external subcontractor.

Almost immediately after deployment, phantom data issues began to surface. Staff reported baffling inconsistencies within the IMS:

  • Ghost Un-allocations: Orders where every single item was successfully picked and packed (allocated), yet the master order status remained stubbornly stuck on "Unallocated."

  • Zombie Allocations: Conversely, orders that had been explicitly cancelled or put on hold, with all items stripped of allocation, were still falsely showing as "Allocated" on the master record.
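Both symptoms point the same way: the master status looks like a *stored* field that is written separately from the item records, so it can drift out of step with the item-level truth. A minimal sketch of the distinction (all names here are hypothetical, not the client's actual schema) — a status *derived* from the items could never exhibit either symptom, while a stored one can:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    allocated: bool = False

@dataclass
class Order:
    items: list = field(default_factory=list)
    status: str = "Unallocated"   # stored separately from items -> can drift

def derive_status(order):
    """Status computed from item state; always consistent by construction."""
    if order.items and all(i.allocated for i in order.items):
        return "Allocated"
    return "Unallocated"

# Ghost un-allocation: every item allocated, stored field never updated.
ghost = Order(items=[Item(True), Item(True)], status="Unallocated")

# Zombie allocation: items stripped of allocation, stored field left behind.
zombie = Order(items=[Item(False)], status="Allocated")
```

In both cases `derive_status()` gives the right answer and the stored `status` field is the one lying — which is exactly what the staff were seeing on the master records.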

My architectural intuition pointed immediately to the new integration. Its primary mechanism was a loop: the MYOB Integration (MI) would poll the IMS API for orders meeting specific criteria, determine whether each order needed to be created, updated, or deleted in MYOB, and then write a log entry back to the IMS.
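The shape of that loop can be sketched in a few lines. This is a simulation, not the MI's actual code — the brief only tells us the three steps, so the IMS and MYOB sides are stubbed in memory and every name is hypothetical:

```python
def decide_action(order, myob):
    """Decide what the integration should do with one IMS order."""
    if order["cancelled"]:
        return "delete" if order["id"] in myob else "skip"
    return "update" if order["id"] in myob else "create"

def sync_cycle(ims_orders, myob, sync_log):
    """One polling cycle: read the candidate set, act, write back a log."""
    for order in ims_orders:                    # 1. poll result set
        action = decide_action(order, myob)     # 2. create / update / delete
        if action in ("create", "update"):
            myob[order["id"]] = order
        elif action == "delete":
            del myob[order["id"]]
        sync_log.append((order["id"], action))  # 3. write-back, per order

myob, log = {}, []
orders = [{"id": 1, "cancelled": False}, {"id": 2, "cancelled": True}]
sync_cycle(orders, myob, log)
sync_cycle(orders, myob, log)   # the next poll repeats the read + writes
```

Note the cost profile: every cycle re-reads the whole candidate set and performs a write-back per order, whether or not anything changed — a plausible mechanism for the server-load jump described next.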

This polling mechanism was placing immense stress on the legacy Java backend. Comparing AWS EC2 load data from before and after deployment revealed a staggering 70% increase in average server load.

The data discrepancies strongly suggested a "half-baked" process: the item details were being handled correctly, but the process was failing just before updating the master record.
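The suspected failure shape is easy to illustrate: a two-step write where the item-level changes land but the master-record update never does. A minimal sketch under that assumption (names and the failure trigger are invented for illustration):

```python
class MasterUpdateFailed(Exception):
    """Stands in for a timeout, crash, or skipped code path."""

def allocate_order(order, master_write_ok=True):
    # Step 1: item-level writes succeed.
    for item in order["items"]:
        item["allocated"] = True
    # Step 2: master-record write. If the process dies here, the items
    # and the master status permanently disagree — no rollback of step 1.
    if not master_write_ok:
        raise MasterUpdateFailed("master status never written")
    order["status"] = "Allocated"

order = {"status": "Unallocated", "items": [{"allocated": False}]}
try:
    allocate_order(order, master_write_ok=False)
except MasterUpdateFailed:
    pass
# Result: the "ghost un-allocation" symptom — items allocated, master not.
```

Run the same two steps in reverse order on a cancellation and you get the "zombie allocation" instead. Either way, the fix direction is the same: the two writes need to be atomic, or the master status needs to be derived rather than stored.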

The hunt for the root cause was on.

What's the call?