Case Studies

We have helped a wide range of clients over the last 25 years and, in that time, have seen most of the ways in which data can be lost. As a result we have a broad knowledge of problems relating to data recovery, as the case studies below show.

Case Study 1 — iMac G4 Not Booting (Question-Mark Folder)

Issue (Client Impact)

An accountant’s iMac G4 failed to boot. Instead of reaching macOS, the machine displayed the classic flashing folder with a question mark (and intermittent “sad face”)—an indicator that the Mac could not locate a valid system folder or a readable boot device. The workstation contained 52 client datasets (Excel workbooks and Sage accounting files) required urgently.

Initial Triage & Diagnostics

  • Intake & Write-block: Drive removed from the iMac and connected via a forensic write-blocker to prevent inadvertent changes.

  • Drive identity: 3.5″ PATA/IDE HDD (typical for the iMac G4 generation). No stable enumeration on first power-up; the motor spun, but the drive returned no ID over ATA.

  • Electronics check: Measured rail resistance and inrush current; the 5 V rail was abnormal. Visual inspection showed a heat-stressed motor-driver IC and a scorched TVS diode footprint.

  • Firmware access: Service-port access attempted; device would not respond—consistent with PCB/controller failure.

  • Conclusion: Electronic failure of the PCB; mechanical subsystem (spindle/heads) appeared sound (no abnormal noises, normal spin-up profile).

Recovery Actions

  1. Donor PCB selection & unique ROM/adaptives transfer

    • Sourced a donor PCB with matching board revision and microcode from in-house inventory.

    • Read the original board’s ROM/NVRAM (contains adaptive calibration data unique to the head-disk assembly).

    • Transferred ROM to donor PCB (hot-air rework) so that the donor electronics matched the platter/head adaptives.

  2. Controlled power-up & firmware sanity checks

    • After PCB swap, drive enumerated reliably in Device-Safe mode.

    • Verified SMART without allowing automatic background processes.

    • Locked background scan, disabled idle-time write/cache features.

  3. Hardware-assisted imaging (PC-3000)

    • Built a head map and confirmed no weak head behavior.

    • Performed multi-pass imaging (soft→hard) with tuned timeouts and LBA windowing to avoid repeated retries (a simplified illustration of this pass strategy follows this list).

    • Imaged 100% of user LBAs to a sterile target.

  4. Filesystem & data extraction

    • Detected an HFS+ (Journaled) volume.

    • Replayed the journal, checked catalog/extent B-trees, repaired minor directory issues.

    • Extracted the Excel and Sage data; verified that the Sage company datasets opened correctly and that the Excel file hashes matched the expected values.

  5. Delivery

    • As the client needed immediate access, we provisioned a secure, encrypted download link. Chain-of-custody documentation and hash manifests were provided.
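
In practice the soft→hard pass strategy from step 3 runs on the hardware imager itself (PC-3000 controls timeouts, head selection and power cycling at the ATA level). Purely to illustrate the idea, a minimal Python sketch might look like the following; the paths, chunk sizes and error handling are hypothetical, not our production tooling.

  # Illustrative only: a two-pass "soft then hard" imaging loop over a source
  # device or image. Real recoveries use a hardware imager (PC-3000), which
  # manages ATA timeouts, head maps and power cycling directly.
  import os

  SECTOR = 512
  SOFT_CHUNK = 1024 * SECTOR      # large reads on the first (soft) pass
  HARD_CHUNK = SECTOR             # single-sector retries on the second pass

  def image(src_path, dst_path, total_bytes):
      src = os.open(src_path, os.O_RDONLY)
      dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT, 0o600)
      skipped = []                             # windows skipped in pass 1

      # Pass 1 (soft): big chunks, skip forward on any read error.
      pos = 0
      while pos < total_bytes:
          length = min(SOFT_CHUNK, total_bytes - pos)
          try:
              os.lseek(src, pos, os.SEEK_SET)
              data = os.read(src, length)
              os.lseek(dst, pos, os.SEEK_SET)
              os.write(dst, data)
          except OSError:
              skipped.append((pos, length))    # note the window, move on
          pos += length

      # Pass 2 (hard): revisit the skipped windows sector by sector.
      bad_map = []
      for start, length in skipped:
          for off in range(start, start + length, HARD_CHUNK):
              try:
                  os.lseek(src, off, os.SEEK_SET)
                  os.lseek(dst, off, os.SEEK_SET)
                  os.write(dst, os.read(src, HARD_CHUNK))
              except OSError:
                  bad_map.append(off)          # still unreadable after retry
      os.close(src)
      os.close(dst)
      return bad_map                           # byte offsets still missing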

Outcome

  • 100% data recovered.

  • Client resumed operations the same day using the secure download set.

Note: The “question-mark folder” on older Macs usually means that no bootable, readable storage was detected, rather than OS corruption alone. Electronic failures on vintage PATA drives are common; success hinges on retaining the original ROM/adaptives when replacing the PCB.


Case Study 2 — Failed Rebuild on a Buffalo LinkStation (RAID 5)

Issue (Client Impact)

A Buffalo LinkStation that had served for years as an office file server suddenly disappeared from the network share list. The web UI showed only “Users” → “Local Users”; shares and volumes were missing. The client later disclosed that they had attempted a rebuild, which failed at ~2%. The array held active departmental documents.

Array & Filesystem Topology (as Received)

  • Enclosure: Buffalo LinkStation (4-bay)

  • RAID level: RAID 5, 4 × 750 GB SATA HDDs

  • OS stack: Linux mdadm RAID layer with an XFS filesystem on top (confirmed during analysis)

Diagnostics

  • SMART screening: Two members exhibited reallocated/pending sector escalation and high UDMA CRC event counts.

  • On-disk metadata: mdadm superblocks indicated previous degradations and a recent rebuild attempt. Parity and data were out of sync (classic “write-hole” effect compounded by the failed rebuild).

  • Risk: Continuing controller-level rebuilds would likely amplify corruption by writing new parity over stale data.

Recovery Actions

  1. Per-disk hardware imaging

    • Used PC-3000/Atola to image each disk independently.

    • Soft→hard imaging strategy with head-map selection and skip/late-fill for unreadable regions.

    • Captured full images; flagged bad LBA ranges for later parity reasoning.

  2. Geometry discovery & validation

    • Extracted mdadm superblocks to confirm RAID 5 geometry (block/stripe size, rotation pattern, start offset).

    • Validated with stripe-alignment tests against known filesystem signatures (superblocks, inode structures).

  3. Virtual array reconstruction (no writes to originals)

    • Reassembled the array in software using the disk images, not the hardware controller (a minimal reader of this kind is sketched after this list).

    • Where one member had unreadable sectors, computed the missing blocks from the parity of the remaining members (when possible).

  4. Filesystem repair

    • Mounted the reconstructed volume read-only; detected XFS metadata inconsistencies likely introduced during the failed rebuild.

    • Executed a metadata repair workflow (log replay, directory/inode btree checks), then exported user data to a sterile target.

  5. Delivery & validation

    • Supplied a 4 TB external HDD with the restored directory tree.

    • Provided a report with hash manifests and a list of previously unreadable LBAs (all covered by parity reconstruction, no user-file loss).
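
The geometry work in steps 2–3 can be pictured as a small read-only “virtual RAID 5” over the image files: a candidate chunk size, disk order and data-start offset map every logical block of the volume to one member image, and the candidate is accepted only when known filesystem structures line up (for example the XFS superblock magic “XFSB” at the start of the volume). The sketch below is a simplified illustration, not our production tooling; it assumes the left-symmetric parity rotation that mdadm uses by default, and the filenames and chunk size are hypothetical.

  # Minimal read-only "virtual RAID 5" over per-member image files.
  # Assumes the mdadm default left-symmetric layout; the chunk size, disk
  # order and data-start offset below are the candidate geometry under test.
  import os

  IMAGES = ["disk0.img", "disk1.img", "disk2.img", "disk3.img"]  # hypothetical
  CHUNK = 64 * 1024        # candidate chunk size in bytes
  DATA_START = 0           # candidate data offset inside each member image
  N = len(IMAGES)
  fds = [os.open(p, os.O_RDONLY) for p in IMAGES]

  def read_virtual(offset, length):
      """Read bytes from the reassembled volume; the images are never written."""
      out = bytearray()
      while length > 0:
          chunk_no = offset // CHUNK           # logical data chunk number
          within = offset % CHUNK
          stripe = chunk_no // (N - 1)         # stripe row (N-1 data chunks per row)
          k = chunk_no % (N - 1)               # data chunk index within the row
          parity_disk = (N - 1 - stripe) % N   # left-symmetric parity rotation
          disk = (parity_disk + 1 + k) % N     # data chunks follow the parity member
          member_off = DATA_START + stripe * CHUNK + within
          take = min(CHUNK - within, length)
          os.lseek(fds[disk], member_off, os.SEEK_SET)
          out += os.read(fds[disk], take)
          offset += take
          length -= take
      return bytes(out)

  # A candidate geometry is accepted only if known filesystem signatures land
  # where they should, e.g. the XFS superblock magic at the start of the volume.
  assert read_virtual(0, 4) == b"XFSB"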

Outcome

  • Full logical recovery of the share set.

  • Client re-published the shares from a new NAS, using our copy as the seed.

Note: On mdadm-based NAS units, if one disk drops and another is marginal, controller-level rebuilds can quickly convert a “recoverable” state into a parity/data inconsistency. Best practice is to image every member first, then reconstruct virtually.


Case Study 3 — HP Server: Post-Rebuild Boot Failure on RAID 5 (8+2)

Issue (Client Impact)

A factory file server (HP rack server) hosted ~2 TB of live data for ~100 staff on a RAID 5 array (8 member drives plus 2 hot spares on a Smart Array controller). The server froze, was power-cycled, and reported one failed drive. The hot spare did not auto-promote, so IT manually initiated a rebuild to one of the hot spares. The rebuild reached 100%, but the server would not boot afterwards. Their MSP attempted recovery and referred the case to us.

As-Received Condition

  • 10 drives total (8 active members, 2 hot spares) delivered loose.

  • Controller logs unavailable; several disks showed recent power-on resets and re-allocations.

  • By the time the set reached our lab, two additional drives displayed degraded behavior—effectively 3 impaired members in the original 8-drive RAID 5.

RAID 5 tolerates the loss of only one member; any period with more than one member unavailable produces irrecoverable stripes, unless the failing members can still yield partially readable sectors during lab imaging.
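
The arithmetic behind that statement is simple: RAID 5 parity is the XOR of the data chunks in each stripe, so one missing chunk can always be recomputed from the survivors, but two missing chunks in the same stripe cannot be separated. A toy example (all values made up):

  # RAID 5 parity is a per-stripe XOR: one missing chunk is recoverable,
  # two missing chunks in the same stripe are not.
  from functools import reduce

  def xor(blocks):
      return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

  d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # toy data chunks of one stripe
  parity = xor([d1, d2, d3])               # what the parity member holds

  # Lose any single chunk: XOR of the survivors reproduces it exactly.
  assert xor([d2, d3, parity]) == d1

  # Lose two chunks (d1 and d2): the survivors only yield d1 XOR d2, which
  # cannot be split back into d1 and d2 without a better read of one of them.
  assert xor([d3, parity]) == xor([d1, d2])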

Diagnostics

  • SMART & surface: Three drives presented distinct failure modes:

    • Disk A: escalating pending sectors and slow seeks (weak heads).

    • Disk B: read channel timeouts (probable preamp/head issue).

    • Disk C: controller/firmware timeouts under load.

  • On-disk metadata: Smart Array headers indicated recent configuration changes matching the forced rebuild.

  • Filesystem on top: NTFS on a single logical volume (validated later).

Recovery Actions

  1. Stabilisation & per-disk imaging

    • Isolated each member to a dedicated imager.

    • Head-map imaging on Disk A (skipping failing head ranges, back-filling later).

    • Adaptive timeouts & power-cycle strategy on Disk B to capture intermittent bands.

    • Vendor-specific quiesce on Disk C to stop background firmware tasks from interfering with reads.

    • Result: High-coverage images for all eight members; bad-block maps retained.

  2. Targeted mechanical/electronics service (on the worst offenders)

    • Performed head-stack replacements on two drives exhibiting clear read-channel/preamp faults (using matched donors; unique ROM/adaptive data preserved).

    • One PCB driver issue corrected via donor board + ROM transfer to stabilise firmware access.

    • Post-service imaging improved coverage enough to mathematically reconstruct previously missing stripes.

  3. Array geometry reconstruction

    • Derived stripe size and parity rotation from controller metadata and content analysis.

    • Normalised capacity where one image reported a truncated LBA span (HPA/DCO artifact).

    • Built a virtual RAID 5 from the eight images; computed missing blocks in stripes where only a single member’s data was absent.

    • Where two members lacked the same stripe region, we merged the best-available sectors from those partially imaged drives to reduce the net “unknown” to at most one chunk per stripe, allowing parity to resolve it (the per-stripe coverage check behind this is sketched after this list).

  4. Filesystem repair & extraction

    • Mounted reconstructed NTFS read-only; replayed $LogFile.

    • Repaired $MFT/$MFTMirr mismatches; validated ACLs and key shares.

    • Exported the full 2 TB dataset to new storage; generated hash manifests for acceptance testing.

  5. Delivery & advisory

    • Clean export provided with a concise incident/recovery report (timeline, drive serials, imaging coverage, and a list of irrecoverable sectors, none of which affected user data).

    • Recommended a controller firmware update, removal of the marginal drives, and a pre-production resiliency test before going live.
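
The “best-of” merge in step 3 is essentially bookkeeping: for each stripe row, count how many members have no readable copy of their chunk; zero or one missing means the stripe is fully recoverable (directly or via parity), while more than one means a better image of at least one member is needed first. A simplified sketch of that check, with hypothetical chunk size and bad-range maps:

  # Per-stripe coverage check over the merged bad-block maps. RAID 5 parity
  # can only resolve a single unknown chunk per stripe row.
  CHUNK = 64 * 1024      # hypothetical chunk size in bytes

  # Unreadable byte ranges per member image, as (start, length) tuples;
  # in practice these come straight from the imagers' logs.
  bad_ranges = {
      0: [(10 * CHUNK, CHUNK)],
      3: [(10 * CHUNK, 2 * CHUNK), (500 * CHUNK, CHUNK)],
      5: [(500 * CHUNK, CHUNK)],
  }

  def stripes_hit(ranges):
      """Stripe rows touched by a member's unreadable ranges."""
      hit = set()
      for start, length in ranges:
          first = start // CHUNK
          last = (start + length - 1) // CHUNK
          hit.update(range(first, last + 1))
      return hit

  # Count, per stripe row, how many members are missing their chunk there.
  missing = {}
  for member, ranges in bad_ranges.items():
      for stripe in stripes_hit(ranges):
          missing[stripe] = missing.get(stripe, 0) + 1

  recoverable = sorted(s for s, m in missing.items() if m <= 1)   # parity resolves
  unresolved  = sorted(s for s, m in missing.items() if m > 1)    # re-image needed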

Outcome

  • Complete logical recovery with no material file loss.

  • Client restored services on replacement hardware using our dataset.

Note: It’s common for a stressed array to experience secondary failures during or after a rebuild. The correct lab approach is per-disk imaging first, then virtual reconstruction—never rebuild on the originals.


Closing (Service Notes)

  • For urgent cases, we operate a critical-path workflow that prioritises stabilisation → imaging → virtual rebuild → extraction, with engineer-to-engineer communication throughout.

  • For shipments: place the drive(s)/NAS in anti-static bags, bubble-wrap, and a padded box, and include your contact details and a description of the incident. Avoid further power-ons.

Why Choose Leicester Data Recovery?

  • Fixed pricing on recovery (you know what you are paying; no nasty surprises).
  • Quick recovery turnaround at no extra cost (our average recovery time is 2 days).
  • Memory card chip-reading services (the first in the UK to offer this service).
  • RAID recovery service (a specialist service for our business customers who have suffered a failed server rebuild).
  • Our offices are 100% UK based and we never outsource any recovery work.
  • Strict non-disclosure; privacy and security are 100% guaranteed.