
April 19, 2026

Data Center Physical Security: Access Control, CCTV, and OT Network Segmentation

Data center physical security: EN 50600-2-5 classes, NIS2 rules, CCTV tamper alerts, IEC 62443 OT segmentation, and factory-integrated wins.

Data center physical security is the layered system of barriers, access controls, surveillance, and environmental monitoring that prevents unauthorized people from reaching IT equipment, and that detects tampering when they try. It is governed in Europe by EN 50600-2-5, which defines four protection classes across site, building, and room boundaries, and by the NIS2 Directive, which as of October 2024 classifies data centre service providers as essential or important entities with mandatory access control, network segmentation, and incident reporting obligations. Physical actions account for under 5% of confirmed breaches according to the Verizon 2025 Data Breach Investigations Report, but the consequences when they happen are outsized: a single cabinet-level compromise can bypass every firewall you own.

This post covers why physical security now lives on three planes instead of one, how to build layered access from perimeter to cabinet, what CCTV actually needs to do beyond recording, how OT network segmentation works in a modern data center, the protocol hardening rules that most operators get wrong, and why factory-integrated security beats bolt-on retrofits every time.

Why data center physical security is now three planes, not one

Here is the mistake most operators make. They think "physical security" means cameras, locks, and a guard at the gate. That was the 2005 model. It is not the 2026 model.

A modern data center, especially a containerized or modular one, is simultaneously a physical enclosure, an electrical installation, and a networked control system. Security controls have to cover all three planes, or attackers walk through the plane you ignored.

Plane 1 is physical. Doors, walls, fences, cabinets. This is what EN 50600-2-5 governs.

Plane 2 is the OT network. Building management systems, PDUs, UPS controllers, cooling controllers, BMCs on servers. Every one of these is an embedded computer with firmware, an IP address, and a management protocol. If an attacker reaches the OT network, they can power off the facility, disable cooling, or trigger a false fire suppression discharge. They do not need to touch the IT network to take you down.

Plane 3 is firmware and supply chain. The module shows up as an integrated stack. Inside are BMCs, gateways, PLCs, cooling unit controllers, and access control panels. Each one is a firmware surface. Each one needs signed updates, secure boot, and a documented patch path. Miss this, and your physical perimeter is irrelevant because the attacker was in the supply chain before the module shipped.

A physical security program that only addresses Plane 1 fails the NIS2 audit. A program that only addresses Plane 2 fails the insurance inspection. You need all three, designed together.

The European standard: EN 50600-2-5 protection classes

EN 50600 is the European standard series for data centres, published by CENELEC. Part 2-5 specifically addresses physical security. It does three things: defines availability and protection classes, specifies constructional and organizational requirements, and sets out technological controls against unauthorized access, fire events, and external events.

The standard defines four protection classes. They apply to individual spaces within the facility, not the whole site, which matters because staging areas and IT rooms typically need different classes.

EN 50600-2-5 Protection Classes

| Protection class | Typical area | What it requires |
|---|---|---|
| PC1 | Public or semi-public spaces, reception, loading dock | Basic access control, visitor logging |
| PC2 | Staff-only administrative areas | Authenticated access, single-factor, audit trail |
| PC3 | IT rooms, power rooms, cooling plant | Dual-factor access, continuous CCTV, breach detection, environmental monitoring |
| PC4 | High-security cage, classified compute | Anti-tailgating, cabinet-level access, tamper-evident sealing, immediate alarm escalation |

ISO/IEC 22237 is the international equivalent, published in 2022 as the international adoption of EN 50600. If you are specifying a module for a site outside Europe, reference both. Tender language should specify the protection class per area, not a blanket requirement for the whole module.

One caveat. EN 50600 conformity can be assessed and declared. It is not automatically "certified" in the way buyers often assume. Certification requires a registered certification body such as TÜViT or EPI, and the site certification has a 3-year validity with annual surveillance audits. When a vendor says "EN 50600 compliant," ask which class, which part, and whether the declaration is self-issued or third-party assessed.

For the full regulatory picture including EED and EnEfG, see our EU data center regulations 2026 guide.

NIS2: why physical security is now a cybersecurity regulation

The NIS2 Directive (Directive (EU) 2022/2555) became binding on operators at the member-state transposition deadline of 17 October 2024, replacing NIS1. It is the most consequential change to data centre compliance in a decade, and most operators are still behind on implementation.

Three things matter for physical security.

First, scope. NIS2 explicitly names "data centre service providers" as a covered sector under digital infrastructure. Commission Implementing Regulation (EU) 2024/2690, published October 2024, sets out the technical and methodological requirements specifically for data centre service providers. If you have more than 50 employees or more than EUR 10 million in revenue and you operate a data centre in the EU, you are in scope. Smaller operators can be in scope if they are deemed critical to sectoral dependencies.

Second, what it requires. Article 21 of NIS2 mandates access control, network segmentation, cryptography and key management policies, supply chain security, incident handling, and business continuity planning. Article 23 requires that significant incidents be reported to the national CSIRT: an early warning within 24 hours of detection, an incident notification within 72 hours, and a final report within one month. ENISA published technical implementation guidance in 2025 that maps each Article 21 requirement to ISO/IEC 27001, NIST CSF 2.0, and CEN/TS 18026:2024.
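
Those reporting stages translate into hard deadlines from the moment of detection. A minimal sketch, approximating the one-month final report as 30 days; national transpositions may set stricter clocks:

```python
from datetime import datetime, timedelta

def nis2_deadlines(detected_at: datetime) -> dict:
    """Outer-bound NIS2 reporting deadlines from the detection timestamp."""
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "incident_notification": detected_at + timedelta(hours=72),
        # "One month" approximated as 30 days for illustration.
        "final_report": detected_at + timedelta(days=30),
    }

detected = datetime(2026, 3, 1, 9, 30)
for stage, deadline in nis2_deadlines(detected).items():
    print(f"{stage}: {deadline:%Y-%m-%d %H:%M}")
```

Wiring this into the incident-handling runbook, with the clock started by the monitoring system rather than by a human, is what keeps the 24-hour window realistic.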

Third, management liability. NIS2 introduced personal liability for management bodies that fail to implement or oversee cybersecurity risk management. This is new. Your CTO is now personally exposed for a physical security control that was signed off and never implemented.

In January 2026, the Commission proposed targeted amendments to simplify NIS2 compliance for smaller entities, but the core obligations for data centre operators remain.

Layered physical access: perimeter, enclosure, room, cabinet

Defense in depth is an old concept. For a modular data center, it translates into four access layers. Each one has a specific job. Miss a layer and you have no redundancy.

Layer 1: site perimeter

The site perimeter is the first barrier. For a fixed ground-based deployment, this means a fence, CCTV coverage of the approach paths, vehicle barriers at the gate, and a staffed or remotely monitored gatehouse. For a rooftop or parking-lot deployment, the perimeter is the building envelope, and access has to be coordinated with the host facility's existing security.

Anti-ram barriers are a specific procurement item. If the site is adjacent to a public road, the perimeter should include crash-rated bollards or Jersey barriers. This is outside the module scope, but it must be specified in the overall site design, because no amount of door hardening stops a vehicle-borne attack.

Layer 2: module enclosure

The outer door of the module is the enclosure boundary. Specify the lock type (mechanical, electronic, or dual), the access control mechanism, and the door rating. For high-security deployments, specify a burglary resistance class per EN 1627, ranging from RC2 through RC6. This is not standard on every vendor's catalog module. Some treat it as a configurable upgrade. Confirm in writing.

All entries must be logged with user identity, timestamp, and whether entry was authorized or a breach event. The log should export via syslog or equivalent to a central monitoring system so that a local compromise does not destroy the audit trail.
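
A hypothetical sketch of such an audit record (field names are illustrative, not mandated by any standard); the point is that identity, timestamp, and outcome travel together to a collector the local device cannot erase:

```python
import json
from datetime import datetime, timezone

def access_event(user_id: str, door: str, authorized: bool) -> str:
    """Structured entry log line for one door event."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "door": door,
        "result": "granted" if authorized else "breach",
    }, sort_keys=True)

# In production this line would go to logging.handlers.SysLogHandler
# pointed at the central SIEM, so a local compromise cannot wipe the trail.
print(access_event("badge-4811", "module-outer-door", True))
```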

Layer 3: room or aisle

Inside the module, hot-aisle or cold-aisle containment creates a second boundary. Containment panels should have their own access control on the aisle doors. This is where dual-factor authentication matters. Card plus PIN, or card plus biometric, for Protection Class 3 and above per EN 50600-2-5.

Anti-tailgating controls belong here for PC4. Mantrap doors are overkill for most edge deployments. A simpler option is a weight sensor or turnstile interlock that allows only one person through per authentication event.
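
The interlock logic is simple to state precisely: one successful authentication arms the door for exactly one passage, and any further passage raises an alarm. A minimal sketch:

```python
class Interlock:
    """One authentication event permits exactly one passage."""

    def __init__(self):
        self.armed = False

    def authenticate(self, credential_ok: bool) -> None:
        if credential_ok:
            self.armed = True

    def passage_detected(self) -> str:
        # Called by the turnstile or weight sensor when a person passes.
        if self.armed:
            self.armed = False          # consume the authentication
            return "allowed"
        return "tailgate-alarm"         # second body on one badge swipe

lock = Interlock()
lock.authenticate(True)
print(lock.passage_detected())  # first person: allowed
print(lock.passage_detected())  # tailgater: alarm
```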

Layer 4: cabinet

The cabinet is the last layer and the most frequently overlooked. Most cabinet locks in the field are still mechanical keys. In a multi-tenant or co-managed deployment, cabinet-level electronic access control is a hard requirement, not a nice-to-have. Look for locks that report open/close events to the monitoring system and that can be remotely disabled if a credential is revoked.

For high-security deployments, tamper-evident sealing on the cabinet and on individual 1U/2U server mounting rails adds a forensic layer. If someone opens the cabinet between audits, the seal tells you.

For a full breakdown of what to specify per interface in a container module procurement, see our container data center specification guide.

CCTV: the spec that matters more than resolution

Most CCTV specifications in data center RFPs are written like consumer product ads. "4K resolution, night vision, 30-day retention." None of that is wrong. All of it misses the point.

CCTV in a data center has three jobs, in order of importance:

  1. Deter. A visible camera changes the behavior of the people who know they are being recorded.
  2. Detect. An alert when something happens that should not be happening.
  3. Evidence. A tamper-proof record when a forensic investigation is needed.

A CCTV system that only does job 3 is a liability, not an asset. You find out what happened after the attacker is gone.

What to actually specify:

  • Coverage. Cameras at every entry point (site gate, module door, aisle doors), every cabinet row, and every cable penetration. Dead zones are where attackers operate.
  • Tamper alerting. If a camera is blocked, moved, or loses power, the system alerts within seconds. This is the single most important spec and the one most often missing. A camera that fails silent is a camera that was defeated.
  • Retention. 30 days minimum for PC3, 90 days for PC4. Retention must be enforced at the storage layer and documented for audit.
  • Integration with access control. The CCTV system should tie video clips to badge events. When a door opens, the 30 seconds before and after are tagged automatically. This turns a multi-hour forensic review into a two-minute one.
  • Analytics. Motion detection at the cabinet row level, intrusion detection at the aisle, and loitering detection at entry points. Modern NVRs include these out of the box; the spec just needs to require them.
  • Storage isolation. The CCTV NVR is on the OT network (see next section), not the IT network. It is also not internet-exposed. Remote access is via jump host or VPN, not a cloud service that phones home.
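
The badge-to-video correlation in the list above reduces to a windowing rule. A sketch of just that rule, with the vendor-specific NVR tagging call omitted:

```python
from datetime import datetime, timedelta

# Tag the 30 seconds before and after every badge event, per the spec above.
CLIP_MARGIN = timedelta(seconds=30)

def clip_window(badge_event_time: datetime) -> tuple:
    """Return the (start, end) video window to tag for one door event."""
    return badge_event_time - CLIP_MARGIN, badge_event_time + CLIP_MARGIN

start, end = clip_window(datetime(2026, 3, 1, 14, 0, 0))
print(start, "->", end)
```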

One thing to get right about privacy. In the EU, CCTV recording is personal data processing under GDPR. The DPIA for the installation needs to exist, signage needs to be in place, and retention needs to be justified against the processing purpose. This is a compliance step that a surprising number of operators skip.

The OT network is not the IT network. Treat them that way.

This is where most data center security programs break. The OT network is the operational backbone of the facility: BMS, PDU controllers, UPS management cards, cooling unit controllers, access control panels, CCTV NVRs, fire panel interfaces. It is also the attack surface most operators leave flat with the corporate LAN.

Flat networks are how ransomware travels from a phished sales laptop to the PDUs that feed your racks.

IEC 62443 is the international standard series for industrial automation and control systems security. It introduces two concepts that belong in every data center security spec: zones and conduits. A zone is a group of assets with shared security requirements. A conduit is a defined, monitored path between zones. Everything else is forbidden.

For a modular data center, the minimum zone structure looks like this:

IEC 62443 Zone Structure for a Modular Data Center

| Zone | Contents | Security level (IEC 62443) |
|---|---|---|
| IT production | Servers, storage, switches carrying tenant workloads | SL 2-3, depending on tenant |
| OT management | BMS, PDU, UPS, cooling controllers, BMCs | SL 2 minimum, SL 3 for critical infrastructure |
| Physical security | Access control panels, CCTV NVR, door sensors | SL 3, isolated from both IT and OT management |
| DMZ | Remote monitoring gateway, vendor support jumpbox | SL 2, with strict firewalling on both sides |

The conduits between these zones are where the firewall rules and ACLs live. Inbound traffic from the IT production zone to the OT management zone should be zero under normal conditions. Outbound telemetry from OT to a central monitoring system is a one-way conduit, ideally implemented as a data diode or a unidirectional gateway.
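
The zones-and-conduits model can be captured as a default-deny matrix: only explicitly listed zone pairs may communicate, everything else is denied. A simplified sketch using the four zones above (zone names and allowed pairs are illustrative, not prescriptive):

```python
# Default-deny conduit matrix: a pair absent from this set is forbidden.
ALLOWED_CONDUITS = {
    ("ot_management", "dmz"),        # outbound telemetry to monitoring gateway
    ("physical_security", "dmz"),    # alarm and CCTV event export
    ("dmz", "ot_management"),        # vendor jumpbox, tightly firewalled
}

def conduit_allowed(src_zone: str, dst_zone: str) -> bool:
    return (src_zone, dst_zone) in ALLOWED_CONDUITS

# IT production to OT management must be zero under normal conditions:
print(conduit_allowed("it_production", "ot_management"))  # False
print(conduit_allowed("ot_management", "dmz"))            # True
```

The same matrix is what the firewall rule review audits against: every configured rule must correspond to a listed conduit, and every listed conduit to a documented business need.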

IEC 62443 defines four security levels:

  • SL1: protection against casual or coincidental violation
  • SL2: protection against intentional attack with simple means
  • SL3: protection against sophisticated attack with moderate resources
  • SL4: protection against nation-state-grade attack

For a commercial enterprise data center, SL2 for OT management and SL3 for physical security is a reasonable baseline. For defense, critical infrastructure, or a facility with EMP shielding, go to SL3 and SL4 respectively.

NIS2 does not explicitly cite IEC 62443, but the ENISA implementation guidance maps NIS2 Article 21 requirements to the IEC 62443 foundational requirements. If you implement 62443 zones and conduits properly, you have done most of the NIS2 network segmentation work already.

Protocol hardening: the free security upgrade everyone ignores

Here is the easiest, cheapest security improvement available, and it is not done anywhere near often enough. Turn off the insecure management protocols on your OT devices. Turn on the secure ones. That is it.

Most shipping data center hardware comes with management interfaces enabled by default that should never see production traffic. The hardening checklist:

SNMPv3, not SNMPv1 or v2c. SNMPv1 and v2c send community strings in clear text. They are credentials on a postcard. SNMPv3 adds authentication (HMAC-SHA) and privacy (AES encryption). Every SNMP-managed device built since 2010 supports v3. Configure it, set a strong passphrase, and disable v1 and v2c at the agent.

HTTPS, not HTTP, for every web management interface. TLS 1.3 where the device supports it, TLS 1.2 minimum. Self-signed certificates are acceptable on an isolated OT network if you have a documented trust chain. Public CA certificates are better if the device supports ACME.
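
In code, the client-side floor for management connections is a one-liner against the stdlib `ssl` module: refuse anything below TLS 1.2 and let the handshake negotiate 1.3 where the device offers it. A sketch:

```python
import ssl

# Enforce a TLS floor for connections to OT management interfaces.
# On an isolated OT network with self-signed certificates, the documented
# trust chain would be loaded via ctx.load_verify_locations() instead of
# relying on the system CA store.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
print(ctx.minimum_version)
```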

SSH, not Telnet. SFTP, not FTP. Telnet and FTP should be disabled everywhere. If a vendor's device only supports Telnet for management, that device should not be in your procurement shortlist. There is no legitimate reason for a 2026 data center product to ship with Telnet enabled.

Redfish or secure IPMI, not legacy IPMI. IPMI 2.0 has known authentication bypass vulnerabilities. Redfish is the modern replacement, built on HTTPS with proper authentication. Specify Redfish support at procurement.

TLS 1.3 for log export and remote management. Centralized structured logging (syslog over TLS, or a modern SIEM agent) is the baseline. Plain UDP syslog is a liability.

Disable unused services. Every open port is an attack surface. Document which services are needed, disable the rest at the device. It is tedious work, and it is among the most effective security measures available.
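
A sketch of the port-level part of that audit: a TCP connect probe for live use, plus a classifier over an already-collected set of open ports so the logic stays testable offline. Port numbers are the conventional assignments; adjust for devices that move services:

```python
import socket

# Legacy plaintext management ports that should never be open in production.
INSECURE = {21: "FTP", 23: "Telnet", 80: "HTTP"}

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """TCP connect probe; feeds audit() when scanning live devices."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(open_ports: set) -> list:
    """Return one finding per insecure port found open."""
    return [f"{INSECURE[p]} (tcp/{p}) must be disabled"
            for p in sorted(open_ports & INSECURE.keys())]

# A PDU with only SSH and HTTPS open passes; Telnet or HTTP open is a finding.
print(audit({22, 443}))        # no findings
print(audit({22, 23, 80}))     # Telnet and HTTP findings
```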

One more thing. Change the default credentials before the module leaves the factory floor. A surprising number of modular data center deployments go live with vendor default passwords on the BMS, the PDUs, and the BMCs. Factory acceptance tests should include a credential rotation checklist, signed and dated, before FAT sign-off.
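
The FAT-side check is mechanical: compare each device's current login against the vendor's documented factory defaults. A toy sketch; the default list here is illustrative, not a real vendor database:

```python
# Known factory default credentials for the devices in this module.
# In a real FAT this list comes from the vendor documentation per model.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("ADMIN", "ADMIN"),
}

def credentials_rotated(user: str, password: str) -> bool:
    """True if the current login is not a known factory default."""
    return (user, password) not in KNOWN_DEFAULTS

print(credentials_rotated("admin", "admin"))   # fails the FAT checklist
print(credentials_rotated("ops", "k3!x-long-unique-passphrase"))
```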

Supply chain and firmware governance

Every embedded controller in the module is a computer. Every one of those computers runs firmware. Every one of those firmware images is a potential backdoor. This is the part of physical security that a padlock does not solve.

The 2025 Verizon Data Breach Investigations Report reported that third-party involvement was implicated in a rapidly growing share of breaches, with vulnerability exploitation as a major driver. When the firmware in your PDU has a known CVE and no patch path, you are one public disclosure away from being exploited.

The procurement spec should require, for every embedded controller in the module:

  • Component provenance documentation. Country of manufacture, chip vendors, firmware sources. This is a bill of materials at the firmware level. NIS2 Article 21 explicitly requires supply chain security measures.
  • Secure boot. The device validates a cryptographic signature on its firmware before executing it. Without secure boot, an attacker who gains physical access can flash malicious firmware that survives every reboot.
  • Signed firmware updates. Updates must be cryptographically signed by the vendor, and the device must reject unsigned images.
  • Trusted update channels. Updates come from a specified, authenticated source, not a random HTTP download.
  • Rollback protection. The device refuses to install firmware older than the current version, preventing downgrade attacks to known-vulnerable releases.
  • Documented recovery. If the firmware gets corrupted, there is a defined, out-of-band recovery procedure that does not require shipping the device back to the vendor.
  • Patch governance. Who is responsible for applying patches, on what schedule, with what testing. This belongs in the service agreement, not in a handshake.
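
The signed-update and rollback-protection requirements above combine into one acceptance check. A simplified sketch: real devices verify an asymmetric signature against a vendor public key anchored in boot ROM; stdlib HMAC stands in here so the control flow is runnable:

```python
import hashlib
import hmac

VENDOR_KEY = b"demo-shared-secret"   # stand-in for the vendor's signing keypair

def sign(image: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def accept_update(image: bytes, sig: bytes, new_ver: int, cur_ver: int) -> bool:
    """Reject tampered images and downgrades to known-vulnerable releases."""
    if not hmac.compare_digest(sign(image), sig):
        return False                 # unsigned or tampered image
    if new_ver <= cur_ver:
        return False                 # rollback protection
    return True

img = b"fw-v7-payload"
print(accept_update(img, sign(img), new_ver=7, cur_ver=6))   # True
print(accept_update(img, sign(img), new_ver=5, cur_ver=6))   # False: downgrade
print(accept_update(img, b"bad", new_ver=7, cur_ver=6))      # False: bad signature
```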

This list used to be aspirational. As of NIS2 and the 2024 EU Cyber Resilience Act, it is the minimum bar.

Factory-integrated data center physical security versus bolt-on retrofit

Here is the argument for modular data centers on this specific topic. A factory-built module, designed with security as an engineering constraint from day one, delivers physical security that a retrofitted facility cannot match at the same cost.

Three reasons.

First, integration is designed, not assembled. When the access control panel, CCTV NVR, BMS, and door sensors are specified together and integrated on the factory floor, the interfaces work. Events correlate. Logs export in a consistent format. When the same components are procured separately and integrated on-site by a systems integrator, the interfaces are a negotiation. The BMS vendor blames the access control vendor. The CCTV system logs to one SIEM, the access control to another.

Second, FAT catches what SAT cannot. Factory acceptance tests include a full security commissioning sequence: door sensor test, access control audit, CCTV coverage verification, protocol hardening checklist, firmware version audit. This happens in a controlled factory environment with the vendor's senior engineers. Site acceptance tests verify the module is installed correctly, not that the security system design is correct. Problems caught at FAT cost hours to fix. Problems caught at SAT cost weeks.

Third, the supply chain is bounded. A factory-integrated module has one supply chain manager who tracks firmware versions and vendor advisories across the whole stack. A retrofitted facility has as many supply chains as it has vendors. Patch governance across a heterogeneous bolted-together facility is an operational nightmare that most operators simply do not perform.

None of this makes modular data centers inherently secure. A poorly specified module is a liability like any other. What it does mean is that the opportunity for a tightly integrated, FAT-verified security baseline is available at modular scale in a way it is not available at traditional data center scale, where security typically gets designed last and installed latest.

For how modular compares to colocation and hyperscale on the overall resilience question, see our piece on why centralized data centers are the new single point of failure.

What to do with this information

If you are writing a specification, pin the protection class per area per EN 50600-2-5, not as a blanket statement. Require IEC 62443 zones and conduits at SL2 minimum for OT, SL3 for physical security. Specify the CCTV tamper alerting and access control integration at protocol level. List every embedded controller in the module and require secure boot, signed updates, and a patch governance plan.

If you are evaluating vendors, ask for the factory acceptance test procedure for security. Not the outcome, the procedure. The vendors who can show you a documented FAT sequence for credential rotation, protocol hardening, and log export verification are the ones who actually do it. The rest are assembling, not engineering.

If you are auditing an existing deployment, start with the protocol hardening checklist above. That is the fastest way to find whether the operator has done the basics. If SNMPv1 is enabled anywhere, the rest of the program is probably paper.

Modular Data Centers by ModulEdge

ModulEdge builds modular data centers with physical security engineered in, not bolted on. Factory-integrated access control, CCTV, and OT network segmentation from day one.

  • 5–150 kW per rack, engineered for edge compute and AI inference
  • Integrated stacks: power, cooling, fire, monitoring, access control, and CCTV
  • EN 50600-2-5 protection classes, IEC 62443 zones and conduits, SNMPv3 / TLS 1.3 hardening verified at FAT
  • Secure boot, signed firmware, and documented patch governance across every embedded controller
  • Designed to meet Tier III/Tier IV principles
  • Typical custom build cycles: 3–6 months

FAQ

What is data center physical security?

Data center physical security is the layered combination of site, building, room, and cabinet access controls, surveillance, environmental monitoring, and operational procedures that prevent unauthorized physical access to IT equipment and detect tampering. In Europe it is specified by EN 50600-2-5, which defines four protection classes. It is regulated by the NIS2 Directive, which as of October 2024 covers data centre service providers as essential or important entities.

What is the difference between physical security and OT network security?

Physical security protects the building and the hardware from people. OT network security protects the control systems that run the building (BMS, PDUs, cooling controllers, access panels, CCTV NVRs) from network-based attacks. In a modern data center you need both, because an attacker who reaches the OT network can disable cooling or trigger fire suppression without ever entering the building. IEC 62443 is the international standard for OT network security, built around the zones-and-conduits segmentation model.

What is EN 50600-2-5?

EN 50600-2-5 is the European standard that specifies physical security requirements for data centres. It was published by CENELEC as part of the EN 50600 series and covers protection against unauthorized access, fire events, and external events. It defines four protection classes (PC1-PC4) that apply to individual spaces within the facility, and it covers constructional, organizational, and technological security solutions. The international equivalent is ISO/IEC 22237.

Does NIS2 apply to data centers?

Yes. The NIS2 Directive has applied to operators since its transposition deadline of 17 October 2024 and explicitly covers "data centre service providers" under the digital infrastructure sector. Commission Implementing Regulation (EU) 2024/2690 sets out the specific technical requirements for data centre operators, which include access control, network segmentation, cryptography policies, supply chain security, and incident reporting within 24 hours. NIS2 applies to operators with more than 50 employees or more than EUR 10 million annual revenue, and smaller operators may be in scope if they are deemed critical.

What cabinet-level access controls are required for high-security data centers?

Protection Class 4 under EN 50600-2-5, appropriate for classified or high-security compute, requires dual-factor authentication at cabinet level with event logging, tamper-evident sealing on cabinets, anti-tailgating controls at the aisle, and immediate alarm escalation on breach events. Electronic cabinet locks that report open/close events to a central monitoring system are the operational minimum. Mechanical keys alone do not meet the PC4 bar.

How should CCTV tamper alerting work in a data center?

CCTV tamper alerting is a system that detects when a camera is blocked, moved, defocused, sprayed, or loses power, and generates an alert within seconds. Modern IP cameras support this natively through analytics at the edge. The alert must be configured to reach an on-call responder, not just log locally, otherwise it is theater. EN 50600-2-5 requires tamper detection for Protection Classes 3 and 4. A CCTV system without tamper alerting is a forensic record, not a security control, because it only tells you what happened after the attacker defeated it.

What is the minimum protocol hardening for data center OT equipment?

At a minimum: SNMPv3 with authentication and privacy instead of SNMPv1 or v2c; HTTPS with TLS 1.3 for web management instead of HTTP; SSH instead of Telnet; Redfish or secure IPMI instead of legacy IPMI; TLS-encrypted syslog for log export instead of plain UDP. All default credentials rotated before production. Every unused service disabled. This checklist is basic, free, and the majority of operators have not fully implemented it.

Why is factory-integrated security better than on-site integration?

Factory-integrated security is designed as a system rather than assembled from parts. The access control, CCTV, BMS, and environmental sensors share a defined interface spec. Factory acceptance tests verify the complete security baseline including protocol hardening, credential rotation, and log export, in a controlled environment with vendor engineering on hand. On-site retrofitted integration depends on systems integrators negotiating interfaces between multiple vendors, and typical FAT procedures do not exist, so problems surface during operation rather than before commissioning.

Yuri Milyutin

Managing Partner at ModulEdge