How quickly and completely your hosting environment recovers from going down depends on whether disaster recovery is a tested, integrated system or a line item on a hosting proposal.
SoBold’s dedicated hosting infrastructure is built around five components:
- Hardware redundancy with automatic failover.
- Geographic disaster recovery tested every six months.
- Backup restorability verified through regular full restorations.
- 24/7 monitoring with structured incident response.
- SLA accountability with financial consequences.
What the infrastructure looks like
Typically each SoBold client receives a dedicated virtual server on a private cloud with guaranteed compute and memory. That “performance isolation” means one traffic spike or security incident stays contained; resources are guaranteed rather than shared across a multi-tenant platform.
Our primary data centre, based in Cambridge and operating on 100% renewable energy (Green Web Foundation certified), houses paired physical servers in separate racks connected to different network switches, with real-time replication across a private network. Resilient power infrastructure, including fuel contracts for indefinite generator operation, supports continuous availability.
Server resources are “proactively monitored” and scaled as usage patterns change, maintaining headroom before performance degrades. If a processor, drive or network component fails, automatic failover at the hypervisor layer absorbs it; the engineering team receives an alert, replaces the hardware, and service continues on the secondary. In over 20 years of operation, the primary platform has never had an extended outage.
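As a rough illustration of the principle, a minimal health-check loop might look like the sketch below. The hostnames, interval and failure threshold are assumptions, and in practice the promotion of the secondary happens inside the hypervisor rather than in a script like this.

```python
import subprocess
import time

# Illustrative only: hostnames, interval and threshold are assumptions,
# and real failover happens at the hypervisor layer, not in a script.
PRIMARY = "primary.example.internal"
SECONDARY = "secondary.example.internal"
CHECK_INTERVAL = 10      # seconds between checks
FAILURE_THRESHOLD = 3    # consecutive failures before declaring the node down


def is_reachable(host: str) -> bool:
    """Return True if the host answers a single ping within five seconds."""
    try:
        result = subprocess.run(
            ["ping", "-c", "1", host], capture_output=True, timeout=5
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False


def monitor() -> None:
    failures = 0
    while True:
        if is_reachable(PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                # The hypervisor promotes the secondary automatically; this
                # print stands in for the engineering alert described above.
                print(f"Primary down; service continues on {SECONDARY}")
                return
        time.sleep(CHECK_INTERVAL)
```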
Geographic DR and tested failover
Hardware redundancy handles component failures within a single facility. A separate category of risk can take an entire data centre offline: power grid failure, flooding, fire, upstream network severance.
A geographically separate DR platform in a different UK data centre region replicates the primary environment continuously. Tested failover brings end-to-end recovery time, including DNS propagation, to under 15 minutes. Failover procedures are exercised every six months under realistic conditions, confirming replication is current, the secondary matches production configuration, and recovery completes within contractual timelines.
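To make the recovery-time idea concrete, the sketch below shows how a failover drill could be timed against a 15-minute target, assuming a hypothetical site name and DR address. The actual exercises are run by engineers against the real replication and DNS configuration.

```python
import socket
import time
import urllib.request

# Drill-style check with assumed values; not SoBold's actual tooling.
SITE = "www.example.com"
DR_IP = "203.0.113.20"       # address the site should resolve to after failover
RTO_SECONDS = 15 * 60        # recovery target, including DNS propagation


def current_ip(hostname: str) -> str:
    """Return the address the local resolver currently returns."""
    return socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)[0][4][0]


def measure_recovery(start: float) -> None:
    """Poll until DNS points at the DR platform and the site answers."""
    while True:
        elapsed = time.time() - start
        if current_ip(SITE) == DR_IP:
            response = urllib.request.urlopen(f"https://{SITE}/", timeout=10)
            if response.status == 200:
                verdict = "within" if elapsed <= RTO_SECONDS else "outside"
                print(f"Recovered in {elapsed:.0f}s, {verdict} the 15-minute target")
                return
        time.sleep(30)
```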
The restorability question
Most hosting providers include daily backups as standard. The more practical question is whether those backups produce a working recovery.
The backup approach retains daily snapshots for a minimum of four weeks across local and geographically remote storage. The commitment that separates routine backup from operational readiness is “guaranteed restorability”: regular verification that a full restoration completes successfully within SLA timelines and the restored environment matches the live site at the point of snapshot. Verification includes testing restore time under realistic data volumes, not just confirming the snapshot file exists.
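The sketch below illustrates the shape of that kind of check, assuming a hypothetical tar-format snapshot, a hash manifest recorded at snapshot time and an assumed restore SLA; it is not SoBold's actual tooling, just the three steps a restorability test has to cover.

```python
import hashlib
import tarfile
import time
from pathlib import Path

# Illustrative only: paths, snapshot format and SLA figure are assumptions.
SNAPSHOT = Path("/backups/site-snapshot.tar.gz")        # hypothetical daily snapshot
RESTORE_DIR = Path("/tmp/restore-test")
REFERENCE_MANIFEST = Path("/backups/site-snapshot.sha256")  # hashes taken at snapshot time
RESTORE_SLA_SECONDS = 2 * 60 * 60


def restore_and_verify() -> None:
    start = time.time()

    # 1. Full restoration into an isolated test location.
    RESTORE_DIR.mkdir(parents=True, exist_ok=True)
    with tarfile.open(SNAPSHOT) as archive:
        archive.extractall(RESTORE_DIR)

    # 2. Compare restored files against hashes recorded at the point of snapshot.
    mismatches = 0
    for line in REFERENCE_MANIFEST.read_text().splitlines():
        expected_hash, rel_path = line.split(maxsplit=1)
        restored = RESTORE_DIR / rel_path
        actual_hash = hashlib.sha256(restored.read_bytes()).hexdigest()
        if actual_hash != expected_hash:
            mismatches += 1

    # 3. Check the restore completed within the SLA window, not just that it ran.
    elapsed = time.time() - start
    verdict = "PASS" if mismatches == 0 and elapsed <= RESTORE_SLA_SECONDS else "FAIL"
    print(f"Restore took {elapsed:.0f}s, {mismatches} mismatched files: {verdict}")
```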
In the hosting transitions we've managed, backup arrangements that amount to automated snapshots with no tested restoration process have been common across the enterprise hosting market. The backup runs on schedule; whether it produces a working recovery under time pressure is a question most organisations don't answer until they have to.
Monitoring, response tiers and SLA accountability
24/7 monitoring covers server performance, network conditions and security events, with alerting within five minutes of anomaly detection. An emergency phone line connects directly to an engineer; most issues are identified and resolved before they reach client-facing systems.
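As one illustration of how anomaly detection can stay inside a five-minute alerting target, the sketch below compares each new response-time sample against a rolling baseline. The one-minute sampling, window size and threshold are assumptions, not the platform's actual tuning.

```python
from collections import deque
from statistics import mean, stdev

# Assumed tuning: one sample per minute, a one-hour window, a 3-sigma threshold.
WINDOW = deque(maxlen=60)
SIGMA_THRESHOLD = 3.0


def observe(sample_ms: float) -> None:
    """Record a response-time sample and flag it if it drifts from the baseline."""
    if len(WINDOW) >= 10:
        baseline, spread = mean(WINDOW), stdev(WINDOW)
        if spread and abs(sample_ms - baseline) > SIGMA_THRESHOLD * spread:
            # With one-minute samples, detection fires well inside a
            # five-minute alerting target.
            print(f"ALERT: response time {sample_ms:.0f}ms vs baseline {baseline:.0f}ms")
    WINDOW.append(sample_ms)
```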
A structured severity classification ties resolution urgency to business impact. Critical incidents (P1) carry a 15-minute response window and two-hour resolution target, available around the clock. High-priority issues (P2) target 30-minute response and four-hour resolution during UK business hours.
Medium and lower-priority issues (P3, P4) follow defined resolution windows from next-business-day through to five business days. The framework is “commercially aware”: a database degradation affecting checkout during a campaign launch receives a different response priority than a staging environment timeout.
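The sketch below expresses those tiers as a simple lookup, using the P1 and P2 figures quoted above and indicative placeholders for P3 and P4, with a toy classifier showing how business impact drives the priority.

```python
from dataclasses import dataclass

# P1 and P2 figures follow the targets described above; the P3/P4 response
# values are placeholders, and the classifier is deliberately simplistic.
@dataclass(frozen=True)
class SeverityTier:
    name: str
    response: str
    resolution: str
    coverage: str


TIERS = {
    "P1": SeverityTier("Critical", "15 minutes", "2 hours", "24/7"),
    "P2": SeverityTier("High", "30 minutes", "4 hours", "UK business hours"),
    "P3": SeverityTier("Medium", "per agreed window", "next business day", "UK business hours"),
    "P4": SeverityTier("Low", "per agreed window", "5 business days", "UK business hours"),
}


def classify(production: bool, affects_revenue: bool) -> str:
    """Toy classifier: business impact, not technical symptom, sets the priority."""
    if production and affects_revenue:
        return "P1"   # e.g. checkout degradation during a campaign launch
    if production:
        return "P2"
    return "P3"       # e.g. a staging environment timeout
```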
Infrastructure SLAs target 99.9% data centre availability, with defined service credits when thresholds are missed. The 99.9% figure applies to the data centre and private cloud infrastructure; the website application layer is measured separately, and conflating the two produces misleading provider comparisons.
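For context, the arithmetic below shows what a 99.9% infrastructure availability target allows: roughly 43 minutes of downtime a month, or under nine hours a year.

```python
# Allowed downtime implied by a 99.9% availability target.
TARGET = 0.999

MINUTES_PER_MONTH = 30 * 24 * 60
MINUTES_PER_YEAR = 365 * 24 * 60

print(f"Allowed downtime per month: {(1 - TARGET) * MINUTES_PER_MONTH:.1f} minutes")
print(f"Allowed downtime per year:  {(1 - TARGET) * MINUTES_PER_YEAR / 60:.1f} hours")
```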
Security as disaster prevention
A large share of hosting disasters, from ransomware to data breaches to exploited vulnerabilities, are preventable through structured security management.
Operating under an ISO 27001 information security management framework, with a 100% pass rate across data centre audits, means that prevention work runs as a set of documented, auditable processes with clear ownership.
Firewalls operate at the hypervisor level, with access controlled through MFA and defined bastion points. Automatic security patching, intrusion detection, rootkit scanning, SSL lifecycle management and continuous vulnerability assessment each reduce the available attack surface.
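As one example of what that lifecycle management automates, the sketch below checks how many days remain on a site's certificate; the hostname and 30-day renewal window are illustrative.

```python
import socket
import ssl
import time

# Illustrative expiry check; hostname and renewal window are assumptions.
HOST = "www.example.com"
RENEWAL_WINDOW_DAYS = 30


def days_until_expiry(host: str) -> int:
    """Return the number of days until the host's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)


if __name__ == "__main__":
    remaining = days_until_expiry(HOST)
    if remaining < RENEWAL_WINDOW_DAYS:
        print(f"Renew the certificate for {HOST}: {remaining} days remaining")
    else:
        print(f"{HOST} certificate has {remaining} days remaining")
```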
Across the hosting environments we manage, the incidents that reach the DR layer are overwhelmingly ones that security processes couldn’t have prevented: hardware failures, upstream network events, third-party outages.
SoBold’s dedicated hosting is built around the architecture, monitoring and recovery processes described here. If your current hosting leaves questions unanswered on any of these points, that’s a conversation worth having.
