Frequently Asked Questions

Common questions from manufacturers about data collection, traceability, quality management, and how 10in6 works in a real plant environment.

What is traceability data?

Traceability data is any information that links a finished product back to every input, process, and person involved in its creation. In manufacturing, this includes raw material lot numbers, machine settings, operator IDs, timestamps, inspection results, and quality check outcomes.

Full traceability means that if a product ever gets recalled, flagged by a customer, or fails in the field — you can pull up a complete birth history of that specific unit or batch quickly and accurately. Without traceability data, manufacturers rely on paper logs that are incomplete, hard to search, and slow to produce during audits.

The two most common traceability models are lot-level (one record per batch of material) and serialized unit-level (one record per individual part). Serialized traceability is more granular and typically required in automotive and medical device manufacturing. Lot-level is more common in food and beverage and process industries.

Effective traceability requires capturing data at the point of production — not reconstructed from memory or paper at the end of the shift. The data needs to be stored in a structured, queryable format so it can be retrieved quickly when a customer or auditor asks.

Can manufacturing equipment send data to ERP systems automatically?

Yes — with the right middleware. Most manufacturing equipment (PLCs, CNC machines, sensors, conveyors) generates real-time data but doesn't speak the same language as ERP systems like SAP, Oracle, or Microsoft Dynamics.

The gap between shop floor equipment and ERP systems exists because they were designed for different purposes. ERP systems manage business transactions — orders, invoices, inventory movements. Shop floor equipment manages physical processes — machine states, cycle completions, sensor readings. Bridging them requires a system that understands both languages.

A Manufacturing Execution System (MES) typically handles this translation. It collects data directly from equipment, structures it into meaningful production records, and passes transactions to the ERP on a scheduled or event-triggered basis — production completions, scrap quantities, labor time, material consumption.

The connection method varies by equipment age and protocol. Modern equipment typically supports OPC-UA or Modbus TCP. Older equipment may require hardware interfaces or direct I/O connections. The ERP side usually accepts data through APIs, flat-file imports, or direct database writes depending on the system and version.
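As a rough sketch of the translation step an MES performs, the snippet below maps a shop floor completion event into the kind of flat production transaction an ERP import (API call or file drop) typically expects. All names and fields here are illustrative assumptions, not a real ERP schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MachineEvent:
    """A raw shop floor event as a data collection layer might emit it."""
    machine_id: str
    work_order: str
    good_count: int
    scrap_count: int
    timestamp: datetime

def to_erp_transaction(event: MachineEvent) -> dict:
    """Translate the machine-level event into a flat ERP-style record."""
    return {
        "transaction_type": "PRODUCTION_COMPLETION",
        "order_number": event.work_order,
        "quantity_good": event.good_count,
        "quantity_scrap": event.scrap_count,
        "posted_at": event.timestamp.isoformat(),
        "source": event.machine_id,
    }

event = MachineEvent("CNC-07", "WO-10042", good_count=118, scrap_count=2,
                     timestamp=datetime(2024, 5, 1, 14, 30, tzinfo=timezone.utc))
txn = to_erp_transaction(event)
print(txn["order_number"], txn["quantity_good"])
```

The real work in production systems is in the connectivity and error handling around this mapping, but the core job is exactly this kind of structured translation.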

What is the best way to make a shift roster for manufacturing?

The most effective shift rosters in manufacturing account for three things that most spreadsheet-based systems miss: real-time attendance, operator skills and certifications, and production demand by process.

A roster built without visibility into who is certified for which process — or who called in sick — creates quality and safety risks. Supervisors end up assigning whoever is available rather than whoever is qualified, which can lead to quality issues on processes that require specific training or certification.

The foundation of a good shift roster is a current skills matrix — a record of which operators are qualified to run which machines or processes, and when any certifications expire. Combined with real-time attendance data, a skills matrix lets supervisors spot coverage gaps before they become production problems rather than discovering them at the start of the shift.
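The coverage-gap check described above reduces to a simple set operation once the skills matrix and attendance are digital. The sketch below uses made-up operators and processes purely for illustration.

```python
# Illustrative skills matrix: operator -> processes they are certified for.
skills_matrix = {
    "alice": {"press-1", "weld-cell"},
    "bob":   {"press-1"},
    "carla": {"weld-cell", "paint-line"},
}
present_today = {"alice", "bob"}          # carla called in sick
required_processes = {"press-1", "weld-cell", "paint-line"}

def coverage_gaps(skills, present, required):
    """Return the required processes no qualified operator is present for."""
    covered = set()
    for operator in present:
        covered |= skills.get(operator, set())
    return required - covered

print(coverage_gaps(skills_matrix, present_today, required_processes))
# {'paint-line'}
```

With certification expiry dates added to the matrix, the same check can also exclude operators whose qualifications have lapsed.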

For shift-based operations with variable demand, layering in production targets by line or cell makes the roster more useful. If one area is running a high-complexity product that day, you want your most experienced operators assigned there — not distributed evenly across the floor.

What data is needed in manufacturing?

The most important manufacturing data falls into four categories:

Production data — output counts, cycle times, target vs. actual performance, and any variance from planned rates. This is the foundation of OEE (Overall Equipment Effectiveness) calculation and the most common starting point for data collection initiatives.

Downtime data — when equipment stopped, for how long, and why. Without categorized downtime reasons, you can't prioritize improvement efforts. Many plants know they have downtime but can't explain where it goes or what's causing it.

Quality data — inspection results, defect types, scrap and rework counts, and SPC (Statistical Process Control) measurements. This tells you whether what you're producing meets specification, and gives you the data to find root causes when it doesn't.

Traceability data — material inputs, operator records, timestamps, and process parameters tied to specific units or batches. Required for customer audits, regulatory compliance, and field failure investigation.

Most plants collect some of this data already — but it tends to be fragmented across spreadsheets, paper logs, and disconnected systems. Consolidating into a single platform is what allows management to make decisions from one trusted source of truth.

How can I improve the quality of my manufacturing process?

Quality improvement in manufacturing follows a consistent pattern: you can't improve what you can't measure, and you can't measure what you can't see.

The first step is digitizing quality checks. Paper-based inspection creates gaps — checks get missed, data is hard to aggregate, and trends don't surface until a customer complains. Digital quality checks with mandatory acknowledgment ensure every check happens on schedule and every result is recorded, making conformance rates visible across lines, areas, and shifts in real time.

The second step is statistical process control (SPC). Rather than catching defects after the fact, SPC monitors critical parameters and signals when trends are moving toward the control limit — before an out-of-spec part is produced. This shifts quality management from reactive to preventive.
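One way SPC software signals a drift before an out-of-spec part is produced is through run rules. The sketch below implements one common rule (Western Electric rule 2: two of the last three points more than two standard deviations from the center line, on the same side). The center line and sigma here are illustrative; a real chart derives them from process data.

```python
def rule_two_of_three(points, center, sigma):
    """Flag when 2 of the last 3 points exceed 2 sigma on the same side."""
    last3 = points[-3:]
    high = [p for p in last3 if p > center + 2 * sigma]
    low  = [p for p in last3 if p < center - 2 * sigma]
    return len(high) >= 2 or len(low) >= 2

readings = [10.0, 10.45, 10.2, 10.5]   # drifting upward
print(rule_two_of_three(readings, center=10.0, sigma=0.2))  # True
```

The rule fires even though no single reading is wildly out of line, which is the point: the trend is caught while every part is still in spec.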

Third, track defects at the source. When operators classify defects at the point of detection rather than at the end of the line, root cause analysis is faster and Pareto data is more accurate. You learn not just how many defects occurred, but where in the process they originated and under what conditions.

These three steps — digital checks, SPC, and source-level defect classification — compound. Each one improves the others. Together they create a quality system that catches problems early and generates the data needed to eliminate their root causes.

Can I use Power BI to collect manufacturing data?

Power BI is a visualization and analysis tool — it can display manufacturing data effectively, but it is not designed to collect data from equipment. To use Power BI in a manufacturing context, you first need a system that connects to your machines and structures the raw signals into a usable format.

Many manufacturers attempt to build this connection directly — writing queries against historian databases or pulling from PLCs via custom scripts. This approach often works initially, but creates significant technical debt. The scripts need maintenance, they break when equipment changes, and they require IT resources to support.

The more sustainable approach is to use a dedicated shop floor data collection platform as the foundation. These systems handle equipment connectivity, data engineering, and storage — producing a structured database that Power BI (or any other BI tool) can query for custom analytics and executive dashboards.

It's worth noting that many manufacturers who go this route discover that the standard reporting built into their data collection platform covers most of their day-to-day needs. Power BI tends to be most valuable for cross-functional reporting that combines shop floor data with ERP, financial, or supply chain data.

What is IIoT?

IIoT stands for Industrial Internet of Things — the network of physical devices, sensors, machines, and systems in industrial environments that collect and share data over digital networks.

In practical manufacturing terms, IIoT refers to the connectivity layer that allows production equipment to send data to software systems for monitoring, analysis, and control. A temperature sensor that feeds data to a quality system, a PLC that reports cycle counts to a production tracking platform, or an energy meter that logs consumption per shift — all of these are IIoT applications.

The most common IIoT protocols in manufacturing include OPC-UA (the current standard for interoperability), Modbus (widely used on older equipment), MQTT (common in cloud-connected sensor networks), and proprietary protocols from major PLC manufacturers like Siemens, Allen-Bradley, and Mitsubishi.

IIoT is the enabler; what matters is what you do with the data. Collecting sensor signals without a system to structure, store, and analyze them produces raw data streams rather than actionable information. The value comes from combining IIoT connectivity with platforms that turn signals into production records, quality results, and maintenance events.

How is OEE calculated?

OEE (Overall Equipment Effectiveness) measures how efficiently a piece of equipment or production line is running compared to its maximum theoretical capacity during scheduled production time. It's expressed as a percentage — a score of 100% means producing only good parts, as fast as possible, with no unplanned stops.

OEE is calculated as the product of three components: Availability × Performance × Quality.

Availability — what percentage of scheduled production time is the equipment actually running? A machine that runs for 7 of 8 available hours has 87.5% availability. Unplanned breakdowns, material shortages, and late starts all reduce availability.

Performance — when the equipment is running, how fast is it producing relative to its target rate? A line producing 80 parts per hour against a 100-part target has 80% performance. Minor stoppages and reduced speeds show up here.

Quality — of the total parts produced, what percentage pass inspection? A run of 800 good parts out of 1,000 total is 80% quality. Both scrap and rework reduce this figure.

Multiplied together: 87.5% × 80% × 80% = 56% OEE. World-class OEE is typically considered 85% or above. Most manufacturers are surprised to find their actual number — the losses across all three components compound significantly. OEE is most useful when tracked by individual machine or line, as plant-wide averages can mask where the real losses are occurring.
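The worked example above can be expressed as a direct calculation. The numbers mirror the text: 7 of 8 scheduled hours running, 80 parts per hour against a 100-part target, and 800 good parts out of 1,000.

```python
def oee(run_time, scheduled_time, actual_rate, target_rate, good, total):
    """OEE = Availability x Performance x Quality."""
    availability = run_time / scheduled_time   # 7 / 8 = 0.875
    performance = actual_rate / target_rate    # 80 / 100 = 0.80
    quality = good / total                     # 800 / 1000 = 0.80
    return availability * performance * quality

score = oee(run_time=7, scheduled_time=8,
            actual_rate=80, target_rate=100,
            good=800, total=1000)
print(f"{score:.1%}")  # 56.0%
```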

Why should I collect data in my manufacturing plant?

In most manufacturing environments, decisions are still driven by intuition, informal observation, and end-of-shift summaries that are already hours out of date. Without data, it's difficult to distinguish between a systemic problem and a one-off event, or to know which issue deserves attention first when multiple things are going wrong simultaneously.

Data collection creates the foundation for every improvement initiative. Downtime data tells you not just that equipment stopped, but how often, for how long, and why — making it possible to identify the single biggest source of lost production rather than guessing. Quality data surfaces trends before they become customer escapes. Production data reveals where actual cycle times diverge from targets and by how much.

The shift from reactive to proactive management requires data. A supervisor who can see in real time that a line is trending below target rate can intervene during the shift. Without that visibility, the same problem shows up in next week's meeting as a number on a spreadsheet — too late to do anything about it.

The most common objection to data collection is cost or complexity. In practice, most plants already have the equipment signals available — the gap is in capturing, structuring, and presenting them in a way that operations teams can act on.

How do I start to collect data in my manufacturing plant?

The most common mistake manufacturers make when starting a data collection initiative is trying to collect everything at once. A better approach is to start with the data that connects directly to your top one or two operational pain points — usually output counts and downtime — and build from there once the foundation is stable.

The technical starting point is equipment connectivity. For most plants, this means connecting PLCs, sensors, or control systems using industrial protocols such as OPC-UA or Modbus. Newer equipment typically has these capabilities built in. Older equipment may need a hardware input module or a signal tap on an existing output. An OPC server is often the cleanest way to standardize communication across machines from different manufacturers and eras.

Once connectivity is established, raw signals need to be structured. A pulse from a proximity sensor isn't useful on its own — it needs to be translated into a production record with a timestamp, operator, product, and shift context. Defining this data model before collection begins saves significant rework later.
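As a minimal sketch of what "structuring a pulse" means in practice, the snippet below attaches shift context to a bare timestamp to produce a queryable production record. The field names are assumptions for illustration, not a fixed schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProductionRecord:
    timestamp: datetime
    machine_id: str
    operator_id: str
    product_code: str
    shift: str
    count: int = 1   # one pulse = one part, in this simple model

def enrich_pulse(pulse_time, context):
    """Turn a raw sensor pulse into a record with full production context."""
    return ProductionRecord(
        timestamp=pulse_time,
        machine_id=context["machine"],
        operator_id=context["operator"],
        product_code=context["product"],
        shift=context["shift"],
    )

rec = enrich_pulse(datetime(2024, 5, 1, 9, 15),
                   {"machine": "PRESS-3", "operator": "OP-112",
                    "product": "BRKT-A", "shift": "day"})
print(rec.product_code, rec.count)
```

Defining this shape up front is the "data model" decision the paragraph above refers to: every downstream report queries these fields.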

Storage and access matter as much as collection. Data that lives in a system operators don't interact with doesn't drive behavior change. The most effective implementations put production data in front of the people closest to the process — operators, supervisors, and engineers — in a format they can read and act on during the shift.

How do I improve manufacturing efficiency?

Manufacturing efficiency improvement requires knowing where time is being lost and where quality is being lost, then acting on both fast enough to make a difference: during the shift, not after the fact.

The highest-impact starting point for most plants is downtime analysis. Most manufacturers significantly underestimate how much production time disappears to unplanned stoppages. When downtime is tracked by category — equipment failure, material shortage, changeover, operator absence — patterns emerge quickly. Addressing the top two or three categories typically recovers more capacity than most other initiatives combined.

The second lever is constraint management. In any production system, one process limits the output of the entire line. Improving non-constraint processes doesn't increase throughput — it only creates inventory in front of the bottleneck. Identifying the true constraint and protecting it from upstream starvation and downstream blockage is often the highest-ROI move available to an operations team.

The third lever is quality at the source. Every defect that escapes to the end of the line — or to the customer — is a compounded loss: materials, labor, and machine time consumed on a product that fails. Catching deviations earlier through SPC monitoring and scheduled quality checks prevents defects rather than sorting them after the fact.

These three areas — downtime reduction, constraint management, and quality improvement — interact. Reducing defects increases throughput at the constraint. Protecting the constraint reduces the impact of downtime elsewhere. Sustained progress on all three is how manufacturers compound efficiency gains over time.

What is the difference between Cp and Cpk?

Cp and Cpk are both process capability indices used in Statistical Process Control (SPC) to measure how consistently a manufacturing process produces parts within specification limits. They're often reported together, but they answer different questions.

Cp (Process Capability Index) measures the spread of the process relative to the allowable tolerance range. It answers: "is the process variation narrow enough to fit within spec?" The formula is Cp = (USL − LSL) ÷ (6σ), where USL and LSL are the upper and lower specification limits and σ is the process standard deviation. A Cp of 1.0 means the process spread exactly equals the specification range, leaving no margin for drift. A Cp of 1.33 or above is generally considered capable.

The limitation of Cp is that it says nothing about where the process is centered. A process could have tight, consistent variation but be aimed near one edge of the specification — producing parts that are technically in spec but with almost no margin. Cp would look acceptable while a centering problem goes undetected.

Cpk accounts for both spread and centering. It calculates the distance between the process mean and the nearest specification limit, relative to the process spread: Cpk = min[(USL − Mean) ÷ 3σ, (Mean − LSL) ÷ 3σ]. A Cpk below 1.0 indicates the process is producing — or is at risk of producing — out-of-spec parts. Cpk is always equal to or less than Cp. When the two values are close, the process is well-centered. A large gap between them signals a centering problem that Cp alone would miss.

In practice, Cpk is the more meaningful index for day-to-day quality monitoring. Tracking both gives the most complete picture: Cp tells you whether the process is inherently capable, Cpk tells you whether it's currently performing that way.
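The two formulas above are straightforward to compute. The example numbers below are illustrative: a specification of 10.0 ± 0.6, a process sigma of 0.1, and a mean that has drifted to 10.3. Note how Cp looks excellent while Cpk exposes the centering problem.

```python
def cp(usl, lsl, sigma):
    """Cp = (USL - LSL) / (6 * sigma): spread vs. tolerance range."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk = min distance from mean to a spec limit, in units of 3 sigma."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

usl, lsl, mean, sigma = 10.6, 9.4, 10.3, 0.1
print(f"Cp  = {cp(usl, lsl, sigma):.2f}")          # 2.00 - looks capable
print(f"Cpk = {cpk(usl, lsl, mean, sigma):.2f}")   # 1.00 - centering problem
```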

How do you calculate pieces per labor hour (PPLH)?

Pieces per labor hour (PPLH) is a productivity metric that measures how many units are produced for every hour of direct labor invested. It normalizes output against the labor applied, making it meaningful to compare across shifts, lines, or time periods even when crew sizes differ.

The calculation is straightforward: PPLH = Total Units Produced ÷ Total Direct Labor Hours. For example, a line that produces 480 parts during an 8-hour shift with four operators has consumed 32 total labor hours — a PPLH of 15. If the same line produces 360 parts the following shift with the same crew, PPLH drops to 11.25 and the variance is immediately visible.
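The same example as a calculation, using the shift from the text (480 parts, four operators, 8 hours):

```python
def pplh(units, operators, shift_hours):
    """Pieces per labor hour = units / total direct labor hours."""
    return units / (operators * shift_hours)

print(pplh(480, operators=4, shift_hours=8))  # 15.0
print(pplh(360, operators=4, shift_hours=8))  # 11.25
```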

PPLH becomes most useful in trend analysis. A declining figure typically points to one of a handful of causes: increased downtime, slower cycle times, higher rework volumes, absenteeism reducing effective crew size, or a product mix shift toward more complex parts. Tracking PPLH alongside downtime and scrap data usually isolates the root cause quickly.

PPLH has limits worth understanding. It doesn't account for differences in product complexity, machine-constrained operations where adding labor wouldn't change output, or changeover-heavy schedules where a significant portion of shift time is non-production. Like OEE, it works best as one metric within a broader performance dashboard rather than as a standalone target. Using it as the sole productivity measure can inadvertently incentivize rushing at the expense of quality.

What is the difference between an MES and a CMMS?

An MES (Manufacturing Execution System) and a CMMS (Computerized Maintenance Management System) address different aspects of plant operations — though in modern platforms they're often integrated or built to share data.

An MES focuses on production: tracking what is being made, how fast, by whom, and with what results. It connects to production equipment to capture output counts, cycle times, downtime events, quality results, and traceability data in real time. The primary users are production operators, supervisors, and operations managers.

A CMMS focuses on maintenance: managing work orders, scheduling preventive maintenance, tracking parts inventory, and recording equipment repair history. When a machine breaks down, the CMMS is where maintenance technicians document what failed, what was done to fix it, and how long it took. The primary users are maintenance technicians, maintenance supervisors, and reliability engineers.

The two systems intersect most significantly around equipment downtime. When an MES detects that a machine has stopped, it can automatically generate a work order in the CMMS — eliminating the gap between a production stoppage being detected and a maintenance request being created. Maintenance completion data flowing back to the MES closes the loop.
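The automatic handoff described above can be sketched as a simple rule: a downtime event past a threshold generates a work order. Function names, fields, and the threshold below are hypothetical, not a real MES or CMMS API.

```python
def maybe_create_work_order(machine_id, downtime_minutes, reason,
                            threshold_minutes=10):
    """Return a CMMS-style work order dict, or None if below threshold."""
    if downtime_minutes < threshold_minutes:
        return None
    return {
        "asset": machine_id,
        "priority": "high" if downtime_minutes >= 60 else "normal",
        "description": f"Unplanned stop ({reason}), {downtime_minutes} min",
        "status": "open",
    }

wo = maybe_create_work_order("PRESS-3", downtime_minutes=25,
                             reason="hydraulic fault")
print(wo["priority"], wo["status"])
```

In an integrated deployment this rule runs automatically when the MES detects the stop, replacing the phone call or paper request.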

In plants where the two systems are separate and don't communicate, this handoff typically happens by phone, radio, or paper — introducing delays, miscommunication, and gaps in the maintenance record. Integration between MES and CMMS is one of the more impactful connectivity improvements a plant can make.

Does an MES replace my ERP?

No — an MES and an ERP serve different purposes and operate at different levels of the business. They are complementary systems, not alternatives.

An ERP (Enterprise Resource Planning) system manages business-level transactions: customer orders, purchase orders, inventory valuation, financials, HR, and scheduling at a high level. It operates in terms of orders and quantities — what has been ordered, what has been shipped, what is in inventory.

An MES operates at the shop floor level, in real time. It tracks production as it happens — which machine is running, what product is being made, how many pieces have been completed, what downtime has occurred, what quality checks have been performed. An MES works in seconds and minutes. An ERP works in hours and days.

The value of connecting an MES to an ERP is that shop floor data flows automatically to business systems without manual entry. Production completions post to inventory, labor hours flow to payroll, quality results attach to work orders — all without operators entering data twice. This reduces transcription errors and gives ERP data the accuracy of the machine-collected source.

Most manufacturers run both systems. The question isn't whether to replace the ERP, but how tightly to integrate the two — and which transactions are worth automating versus managing manually.

How do I reduce downtime in manufacturing?

Reducing downtime starts with understanding it accurately. Most plants underestimate their true downtime because small stops — under a few minutes each — go unrecorded. These micro-stoppages can account for a significant share of lost production time while never appearing on any report. The first step is capturing all downtime, not just the events someone remembered to write down.

The second step is categorization. Raw downtime minutes are not actionable. Downtime categorized by reason — equipment failure, changeover, material shortage, operator absence, tooling, quality hold — tells you where to focus. In most plants, two or three categories account for the majority of lost time. Those are the ones worth addressing first.
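Finding those top categories is a simple aggregation once downtime events carry a reason code. The event data below is illustrative.

```python
from collections import Counter

# (reason, minutes) pairs as a data collection system might log them.
events = [("changeover", 30), ("equipment failure", 45),
          ("material shortage", 15), ("changeover", 25),
          ("equipment failure", 50), ("tooling", 10)]

minutes_by_reason = Counter()
for reason, minutes in events:
    minutes_by_reason[reason] += minutes

# Pareto view: the few categories that account for most lost time.
for reason, minutes in minutes_by_reason.most_common(3):
    print(reason, minutes)
# equipment failure 95
# changeover 55
# material shortage 15
```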

From there, the approach depends on the category. Equipment failures are best addressed through preventive maintenance — scheduled inspections and part replacements before failures occur, informed by equipment history and manufacturer intervals. Changeover time is typically reduced through SMED (Single-Minute Exchange of Die) methodology: separating internal setup steps (done while the machine is stopped) from external ones (done while it's still running). Material-related downtime usually points to scheduling or supply chain issues upstream.

One principle applies across all categories: downtime that is measured and visible gets reduced. When operators and supervisors can see downtime data in real time — by machine, by reason, by shift — it changes behavior. Problems get escalated faster, root causes get investigated sooner, and patterns that would have been invisible in weekly summaries become obvious.

How long does it take to implement an MES?

MES implementation timelines vary significantly depending on the scope of the deployment, the number of machines being connected, the complexity of the data model, and the maturity of the plant's existing infrastructure.

For a focused initial deployment — one or two production lines, standard data collection (output, downtime, quality checks) — a well-structured MES implementation typically takes four to eight weeks from project start to live data. This assumes equipment connectivity is straightforward and the plant team is available to participate in configuration and testing.

Larger deployments covering multiple lines, complex traceability requirements, ERP integration, or custom reporting can take three to six months. The connectivity work — getting signals from every machine into the system reliably — is usually the longest pole in the tent.

A few factors consistently extend timelines: equipment that requires custom hardware interfaces, ERP integrations that need IT involvement on both sides, unclear data requirements at the start of the project, and plant teams that are too stretched to dedicate time to validation and testing.

The most common mistake in MES implementations is trying to go live everywhere at once. A phased approach — starting with the highest-priority lines and expanding from a proven foundation — typically delivers faster results and exposes fewer integration problems at once.

Have a question we didn't answer?

We're happy to talk through your specific plant challenges and how 10in6 might apply.