Overview of Beverage Packaging Process
Depalletizing to Palletizing Video: (https://www.youtube.com/watch?v=YTVy2rtPmpY)
1. In a typical high-speed beverage packaging line, empty containers are fed at the start and
finished products are stacked on pallets at the end. The process begins with depalletizing
– removing bottles or cans from bulk pallets and feeding them onto the line layer by layer.
2. Next, containers may pass through a rinser or air rinse for cleaning, then enter the filler
which rapidly dispenses product into them. For example, Asahi’s Laverton plant runs
filling stations at about 1000 bottles per minute.
3. After filling, containers are sealed (capped or seamed) and often pasteurized (heated to
~50–70 °C) to stabilize the product. They then move to a labeller, which in a fraction of a
second applies front, back, and neck labels to each bottle.
4. Subsequent equipment groups the containers into packs (six-packs, cases, etc.) and places
them into cartons. Finally, a palletizer stacks the packed cases onto pallets, which are
stretch wrapped for distribution.
5. Throughout this sequence, conveyors and accumulation tables synchronize the flow
between machines, creating a “mechanized hi-speed conga line” of products.
Industry-Specific Reliability Challenges
Beverage packaging lines are highly automated and operate at very high speeds, so even
minor stoppages can cause upstream/downstream backups. The equipment is complex and
sensitive, requiring constant attention.
Small misalignments or jams (e.g. a slipped label or a broken bottle) can halt the entire line.
The industry also faces strict hygiene and quality standards – frequent washdowns and
sanitization can introduce wear or moisture issues, and any deviation can risk product safety.
Moreover, many beverage plants run 24/7 to meet demand, which leaves limited windows for
maintenance. This leads to a challenging environment where packaging machinery is often
seen as more prone to downtime than other equipment in food/beverage production.
High throughput, sticky liquids (sugars, etc.), and constant container handling can cause
accelerated component wear (e.g. filler valves, conveyor motors) and frequent adjustments. If
not properly addressed, these factors result in unplanned downtime, lost production, and
costly interventions. Maintaining reliability under these conditions requires robust
maintenance practices and careful design to handle the stress of continuous operation.
Key Equipment in Beverage Packaging Lines
Beverage packaging lines consist of several critical machines, each with specific functions
and maintenance needs from start to finish:
Depalletizer: Automatically unloads containers from pallets and feeds them onto the line. It
removes bottles or cans layer by layer from the pallet and places them on conveyors.
Maintenance: Keep sensors, sweep arms, or vacuum pick mechanisms aligned and clean.
Clear out broken glass or debris promptly.
Failure modes: Jams due to misaligned layers or slip-sheets, dropped containers, or
motor/drive failures can all halt input supply.
Filler: Often called the “heart” of the line, the filler meters the product into containers with
high precision. There are many types (pressure fillers for carbonated drinks, gravity fillers for
still liquids, etc.), sometimes integrated as a monoblock with a capper or rinser.
Maintenance: Requires regular sanitation (CIP – Clean-In-Place) to meet hygiene standards,
plus calibration of fill volumes. Critical components include valves, seals, and filling heads,
which wear over time and need refurbishment.
Common issues: Inconsistent fill levels or spillage (from worn seals or nozzle issues),
foaming or air in product causing fill variation, and electronic sensor faults. Even slight filler
malfunctions can cause line stoppages or product waste, so maintaining it is a top priority.
Capper/Seamer: Immediately after filling, bottles are capped, or cans are seamed closed.
Cappers apply caps (metal crown caps, plastic screw caps, etc.) and seamers roll the lid onto
cans.
Maintenance: Ensure cap delivery chutes and seaming rolls are adjusted and lubricated.
Verify torque or seam tightness regularly.
Failure modes: Misfeeds (jams if a cap/can is out of place), inconsistent torque or seam
quality leading to leaks, and mechanical wear in the capping heads. For instance, common
capper problems include misaligned caps or insufficient torque, often requiring re-calibration
or parts replacement.
Pasteurizer (if used): Many beers and some soft drink lines include a tunnel pasteurizer after
capping. It heats containers to kill microbes and then cools them.
Maintenance: Monitor pumps, spray nozzles, and heat exchangers; prevent bottle breakage
inside.
Issues: Temperature control faults can under- or over-pasteurize (risking quality), and
sediment or bottle leaks can foul the water circulation, requiring cleaning.
Labelling Machine: Applies labels to containers at high speed – this could be cold-glue
labels, pressure-sensitive stickers, or shrink-sleeves depending on the product.
Maintenance: Keep label magazines or reels loaded and aligned, clean adhesive application
parts, and calibrate sensors/photocells for label placement.
Common failure modes: Label jams or tears, glue nozzle clogs, and registration errors
(misalignment). Notably, industry surveys indicate labelling/coding machines are among
the most failure-prone packaging machines, with many operators rating them “extremely
likely” to cause downtime. Even minor labeler hiccups (e.g. a torn label) can force a line
stoppage to avoid unlabelled product.
Case Packer/Cartoner: Once individual containers are ready, case packers aggregate them
into larger bundles (e.g. putting bottles into a carton or crate). For example, a case packer
places the filled, capped, and labelled bottles into a cardboard box in the correct
configuration. Some lines use shrink wrappers to wrap multipacks of cans.
Maintenance: Ensure smooth operation of pack formation – lubricate mechanical packer arms
or guides, verify carton magazines feed properly, and fix worn suction cups or grippers.
Failure modes: Cardboard cases can jam if not erected fully, or bottles can get knocked over
if timing is off. A case packer jam will block the entire outfeed, so sensors and interlocks
must work correctly to stop upstream flow and prevent a pile-up.
Palletizer: This end-of-line machine stacks the finished cases or trays onto pallets in stable
layers. It may be a traditional layer palletizer or a robotic arm.
Maintenance: Keep pallet supply steady, check layer-forming mechanisms or robot
calibration, and maintain safety sensors (as palletizers handle heavy loads).
Common issues: Mis-stacks or dropped cases if a pusher or gripper malfunctions, sensor
faults causing misalignment, or conveyor failure feeding the palletizer. The palletizer’s
reliability is crucial – a breakdown here means products accumulate with nowhere to go,
effectively freezing the whole packaging operation.
Conveyors & Accumulators: Though not always “critical equipment” by name, the network
of conveyors that link these machines is vital.
Maintenance: Regularly adjust and replace belts, chains, bearings, and motors; clear any
bottle caps, broken glass, or spilled liquid that could cause jams or slip hazards. Many lines
have accumulation tables to buffer product flow; if these fail (e.g. jam or sensor error), they
can’t protect the line from minor upstream/downstream stoppages. In fact, synchronization of
all these parts is critical – “the smallest mistake in your packaging line can slow the whole
operation down”.
Reliability Concerns
Each machine has distinct failure modes, but a common theme is that one equipment failure
can idle the entire line. For instance, if the filler stops, everything upstream (depalletizer,
rinser) must pause once accumulation is full, and everything downstream (capper, labeler,
etc.) starves shortly after. This interdependence makes bottling lines particularly sensitive to
reliability issues.
Frequent start-stop cycles from minor faults can also reduce Overall Equipment
Effectiveness (OEE) due to lost time and slower speeds. Thus, maintenance must be holistic
– not just fixing individual machines, but ensuring the entire sequence runs smoothly. Typical
reliability concerns include abrasive wear (on fillers, cappers), material fatigue (conveyor
links, gearboxes), sensor drift or failure (mis-detecting bottles/labels), and repetitive
shock/vibration (e.g. palletizer arms or depalletizer lifts). By understanding common failure
modes – such as those in fillers (leaking valves), cappers (loose caps), labelers (jams), etc. –
maintenance teams can target preventive measures (spare parts on hand, timely overhauls,
condition monitoring) to keep the line running with minimal unplanned stops.
Operations at Laverton and Abbotsford Sites
Laverton (Asahi Beverages):
Asahi’s Laverton facility in Victoria is a major production site that functions as a
multi-beverage brewery and packaging plant. According to company information, this is Asahi’s
main Australian brewery (aside from the CUB sites acquired later).
The Laverton plant produces beer, ready-to-drink mixed beverages (RTDs), and soft
drinks on high-speed lines. It was designed with advanced, automated packaging capabilities
– as of 2013, it had one bottling line and one canning line, each able to fill ~1000 containers
per minute. Such speeds are world-class, requiring precise coordination of depalletizers,
fillers, and palletizers. A tour of the facility highlighted its just-in-time supply of packaging
materials (bottles/cans delivered within 24 hours of use) and fully automated flow from
depalletizing through filling, pasteurizing, labeling, packing, and palletizing.
After an expansion in 2018, Laverton also added brewing capacity (e.g. new cereal cooker,
fermenters) to increase beer output. The site even includes a small R&D lab and a keg line
(largely automated by a robot arm) for draught beer packaging. In operations, Laverton is
relatively modern – built with state-of-the-art Krones equipment and an “automation-first”
approach (one operator can oversee the keg line, for example). This likely means
maintenance practices at Laverton are aligned with contemporary standards (predictive
sensors, streamlined CIP, etc.), but the high speed and product variety (beer, RTDs, and cider) pose
their own challenges.
Notably, Laverton bottles Somersby cider for Asahi and brews brands like Cricketers Arms
craft beer on site. The site’s focus on fast changeovers and multiple product types requires
flexible maintenance – teams must handle cleaning and adjustments between, say, a beer run
and a soft drink run, without sacrificing reliability.
Abbotsford (CUB/Asahi Brewery):
The Abbotsford Brewery in Melbourne is one of Australia’s most historic and iconic
breweries, in operation since 1904. Formerly the flagship brewery of Carlton & United
Breweries (CUB), it became part of Asahi in 2020. This site has a massive production
capacity dedicated mainly to beer (and some cider) – both beer and cider are produced at
Abbotsford in large volumes.
The facility brews famous beers such as Victoria Bitter and Carlton Draught, and with Asahi’s
acquisition, it continues to produce those legacy brands under the Asahi Beverages umbrella.
Operationally, Abbotsford is a large-scale plant with multiple packaging lines. While specific
line speeds are not published, it likely has separate bottling lines (for stubbies and longneck
bottles) and canning lines, as well as keg lines, to handle the broad portfolio. (CUB’s range
includes many package formats, so Abbotsford must accommodate different bottle sizes,
cans, and perhaps specialty packaging.)
The site has undergone modernization in parts – for example, over 4,000 solar panels were
installed on its roof to supply power, showcasing a commitment to sustainability. However,
given its age, Abbotsford’s equipment mix may include older infrastructure alongside
newer upgrades. This can present maintenance challenges: legacy machines might require
more frequent care and creative retrofits to meet today’s efficiency standards. Public info
notes that despite being an old plant, it continues to improve; for instance, Asahi/CUB
achieved the milestone of brewing its 50 millionth solar-powered beer there, implying ongoing
investment in the site.
In terms of layout, Abbotsford is an inner-city brewery with a tight footprint. Aerial views
show a dense complex of brewhouses, fermenters, and packaging halls in close proximity
(contrasting with Laverton’s more spacious industrial estate setting). This means
maintenance and operations at Abbotsford must deal with space constraints for equipment
installation and storage. The workforce at Abbotsford includes highly experienced brewers
and engineers, given its long history – an advantage for troubleshooting. But a cultural shift
might be underway as Asahi integrates Abbotsford into its maintenance philosophy. It’s likely
that standardizing practices between Laverton and Abbotsford is a goal: for example,
ensuring both sites use similar maintenance management systems (CMMS) and KPIs, and
share best practices.
Site-Specific Insights:
Laverton’s operations exemplify a newer, leaner approach – fewer operators due to
automation, a wide array of beverages produced, and an ethos of “more with less” (fast runs,
quick changeovers). The maintenance team there probably emphasizes preventive and
predictive tactics to avoid slowing that 1000/min throughput.
Abbotsford, with its scale, might focus on managing high volumes – ensuring that any
downtime is minimized because lost output of core beers is very costly. It likely has more
redundancy in packaging lines (multiple lines for different products), whereas Laverton might
have fewer lines handling many products. Abbotsford’s sheer size also means maintenance
coordination is complex: different areas (brew house, packaging, utilities) need to work in
concert. The site’s recent improvements (solar power, possibly water recycling or packaging
upgrades) show a drive to modernize, but integrating new technology with old equipment can
test reliability. For example, if new monitoring sensors are added to decades-old machinery,
data quality and training become important.
In summary, Laverton stands out for high-tech, multi-product packaging operations with
cutting-edge speed, whereas Abbotsford stands out for high-volume, legacy operations of
classic beers. Both sites face the common industry challenge of keeping packaging lines
efficient and reliable, but their strategies may differ: Laverton might be a testbed for
innovation (new maintenance tech, trialing condition monitoring on a smaller scale), and
Abbotsford leverages tried-and-true methods (robust preventive maintenance schedules,
experienced technicians) scaled up for a big plant. Aligning these operations under one
company offers an opportunity for cross-learning – e.g., applying Laverton’s modern
techniques to Abbotsford, and Abbotsford’s depth of experience to Laverton.
Maintenance Strategies
In managing equipment reliability, Asahi (like any manufacturer) can choose from several
maintenance strategies. The main approaches are Breakdown (Reactive) Maintenance,
Preventive Maintenance, and Condition-Based (Predictive) Maintenance – each has its role,
with distinct pros and cons:
Breakdown Maintenance (Reactive): “Run to failure” – you fix or replace equipment only
after it breaks. The benefit is maximum utilization of components (you squeeze every bit of
life out of them) and minimal upfront effort in planning. In the short term, this can reduce
routine maintenance labor and avoid unnecessary part changes.
However, the downsides are significant: unplanned downtime can be extensive and chaotic,
and a failure can cause secondary damage (a minor part breaking might damage a larger
assembly). Costs tend to be higher in the long run due to collateral damage and overtime
repairs. In a high-speed bottling context, pure reactive maintenance would mean frequent line
stoppages – “Downtime would be disproportionately high” if components are simply left to
fail.
This approach is typically only suited for non-critical or redundant equipment where an
unexpected failure won’t severely impact safety or production. Over-relying on reactive fixes
can shorten asset life and lead to crisis management mode in maintenance.
Preventive Maintenance (PM): A proactive strategy of performing regular, scheduled
maintenance tasks (inspections, parts replacements, lubrications) regardless of current
condition. This could be time-based or usage-based (e.g. service a filler every 6 months or
every 1000 hours). The goal is to prevent failures before they happen. Benefits include lower
overall failure rates and more predictable downtime – equipment is less likely to surprise you
with a breakdown if it’s routinely cared for. Companies see less unplanned downtime and
fewer malfunctions with a good PM program.
Planned maintenance can be scheduled in off-shifts or slower periods, minimizing production
impact. The trade-offs: PM can be resource-intensive – it requires planning, scheduling, and
possibly taking machinery offline regularly. It can also lead to “over-maintenance,” where
you service or replace parts that still have useful life. This incurs extra cost. For example,
replacing a bearing every 3 months regardless of condition might waste bearings that could
last 6 months. There’s also the risk that maintenance itself (if done incorrectly) can introduce
errors.
Nonetheless, in beverage plants, a well-designed PM schedule (perhaps guided by vendor
recommendations and past failure data) is crucial. It strikes a balance: slightly increased
planned downtime, in exchange for much fewer emergency stoppages. Preventive
maintenance is especially important for known wear-and-tear items like filler seals,
capping chucks, gearboxes, etc., and for ensuring safety devices (like interlocks) always
work.
Condition-Based / Predictive Maintenance: This approach goes a step beyond fixed
schedules by using actual equipment condition data to decide when maintenance is needed. It
encompasses techniques like condition monitoring (vibration analysis, thermal imaging, oil
analysis, sensor readings) and predictive analytics (using trends to predict failures). The idea
is to perform maintenance only when indicators show signs of deterioration, so you avoid
both catastrophic failure and unnecessary early maintenance. For example, rather than
overhauling a compressor every 6 months, you monitor its vibration or temperature and only
intervene when those metrics start to drift from normal.
The benefits can be big: you eliminate most unexpected failures (because you catch issues
early) and also avoid replacing parts that are still good. Predictive maintenance, when fully
implemented, reduces both unplanned and planned downtime – one source notes it “removes
the need to run-to-failure or replace a part while it still has life”. It also provides a holistic
view of asset health, which can improve long-term decision-making (like knowing which
machines are problem-prone).
However, this strategy has higher upfront complexity: it requires investment in
sensors/instrumentation, data collection systems, and skilled analysis. Implementing it
involves tackling technology and data management challenges. Maintenance staff need
training to trust and use the data. In an older plant like Abbotsford, for instance, adding IoT
sensors to legacy equipment might require significant retrofitting and debugging.
Furthermore, not every failure mode is easily predictable – some components can fail
suddenly without clear precursors, limiting the effectiveness of condition-based methods in
those cases.
In practice, a robust maintenance program usually combines all three approaches in a
hierarchy based on criticality. Less critical, inexpensive items might be left to run to failure
(e.g. a light bulb in the warehouse, or perhaps a redundant pump). The core production
equipment will have preventive maintenance schedules (lubrication rounds, periodic
overhauls) to keep them reliable. And increasingly, condition-based techniques are layered on
top for critical assets – for example, monitoring a filler’s drive motor vibration or a palletizer
robot’s servo feedback to predict issues before they stop production.
Modern industry best practice leans toward more proactive maintenance. As one food/bev
industry article put it: relying on reactive maintenance alone would cause excessive
downtime and stress, so it’s advisable to “switch from reactive to preventive… or implement
predictive maintenance” for sensitive systems.
Preventive maintenance is the baseline, and where feasible, condition-based maintenance
further optimizes the timing of those interventions. The ultimate goal is zero unplanned
stops: fixing problems just in time before failure, but not too early. This optimizes costs
and uptime simultaneously.
Benefits and Limitations Recap: A Breakdown approach minimizes upfront work but risks
catastrophic failures and high long-term cost. Preventive maintenance is more controlled – it
reduces failure frequency, keeps reliability up, and is easier to budget, but can involve
unnecessary work and more scheduled downtime. Condition-based/Predictive maintenance
promises the best of both worlds (only doing needed work, and avoiding surprises), but
requires technology and expertise investments and works best in a data-rich, mature
maintenance environment.
In implementing these at Laverton and Abbotsford, Asahi will need to consider the current
state of equipment and data: newer equipment might already have sensors for condition
monitoring (e.g. newer Krones lines might provide diagnostics), whereas older lines might
start with a solid preventive regime while gradually adding predictive tools.
Maintenance KPIs and Performance Measurement
To guide and evaluate maintenance effectiveness, organizations use Key Performance
Indicators (KPIs). The following metrics are especially relevant in packaging operations:
Mean Time Between Failure (MTBF): This is the average operating time between two
failures for a given asset. Essentially, it answers “on average, how long does this equipment
run before it needs to be fixed?”. A higher MTBF means the equipment is more reliable.
MTBF is typically calculated as total operational hours divided by the number of failures in
that period.
For repairable assets, it’s a core reliability metric.
Use: Tracking MTBF helps identify troublesome equipment – if one packaging line has an
MTBF of 50 hours while another is 200 hours, the first is far less reliable. Maintenance teams
strive to extend MTBF by addressing root causes of frequent stops. For example, if a labeler
fails on average every 8 hours due to label jams, one might invest in better labels or a
maintenance tweak to improve that interval.
Informing strategy: MTBF trends tell us if reliability is improving with our maintenance
program. If MTBF is predictably, say, 60 days, we know to schedule preventive actions
before that point to pre-empt failures. It also helps balance load – extremely high MTBF
might indicate we’re over-maintaining (or have very robust machines), whereas extremely
low MTBF signals urgent need for maintenance improvement or asset renewal. However,
MTBF must be paired with other metrics; a high MTBF isn’t sufficient if repairs, when they
do happen, take too long.
Mean Time to Repair (MTTR): This measures how quickly a machine is restored after a
failure – on average, the time from a breakdown to running again. It captures maintainability
and the efficiency of the response. A lower MTTR is better (faster fixes). For instance, if a
capper jams, MTTR includes the time to diagnose, fetch parts/tools, fix the issue, and restart
production.
Use: Companies track MTTR to see if maintenance response is quick and effective. Long
MTTR might point to issues like poorly stocked spares, inadequate training, or complex
designs that are hard to repair.
Informing strategy: MTTR feeds directly into availability – a short MTTR can significantly
cut downtime impact. If an asset has a high MTBF (rarely fails) but also a very high MTTR
when it does, that single failure can still be costly. So, maintenance strategy might focus on
reducing MTTR by improving spare parts management or technician training. For example,
keeping a spare sensor on hand could turn a 2-hour sensor failure repair into 20 minutes,
greatly lowering MTTR.
Together, MTBF and MTTR define reliability in tandem: we want long intervals between
failures and short durations to fix. These metrics help decide maintenance frequency – if
MTBF is low, more frequent PM or redesign is needed; if MTTR is high, invest in quicker
recovery measures.
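To make these two metrics concrete, the short Python sketch below computes MTBF, MTTR, and availability from a simple downtime log. The timestamps, event count, and 24/7 scheduled period are illustrative assumptions, not data from either site.

```python
from datetime import datetime

# Illustrative downtime log for one packaging line over a reporting period.
# Field names and values are hypothetical, not taken from any real CMMS.
period_start = datetime(2024, 1, 1)
period_end = datetime(2024, 1, 31)
failures = [
    (datetime(2024, 1, 4, 10, 0), datetime(2024, 1, 4, 11, 30)),   # (breakdown start, restored)
    (datetime(2024, 1, 12, 2, 15), datetime(2024, 1, 12, 2, 45)),
    (datetime(2024, 1, 20, 16, 0), datetime(2024, 1, 20, 19, 0)),
]

scheduled_hours = (period_end - period_start).total_seconds() / 3600  # assumes 24/7 running
repair_hours = sum((end - start).total_seconds() / 3600 for start, end in failures)
uptime_hours = scheduled_hours - repair_hours

mtbf = uptime_hours / len(failures)   # average running time between failures (hours)
mttr = repair_hours / len(failures)   # average time to restore after a failure (hours)
availability = uptime_hours / scheduled_hours

print(f"MTBF: {mtbf:.1f} h, MTTR: {mttr:.1f} h, Availability: {availability:.1%}")
```

In practice the same calculation would be driven from CMMS work-order data rather than hard-coded events, but the arithmetic is identical.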
Overall Equipment Effectiveness (OEE): OEE is a comprehensive metric that measures the
productive output of a machine or line relative to its theoretical maximum. It is defined as
OEE = Availability × Performance × Quality.
Availability is the percentage of scheduled time the machine is actually running (uptime),
accounting for downtime losses (equipment failures, changeovers, etc.).
Performance is the speed factor: how the actual throughput compares to the design speed,
accounting for slowdowns or small stops.
Quality is the yield: the fraction of good units produced, accounting for rejects/rework.
OEE effectively captures how well an asset is being utilized. For example, an OEE of 85% is
considered world-class in many industries – it means you’re getting 85% of the theoretical
perfect output (losses of 15% spread across downtime, speed, and rejects).
Use: OEE is great for bottling lines because it highlights where the biggest losses are. If a
filler has an OEE of 60%, we can break it down – maybe availability is low due to frequent
stops (maintenance issue), or performance is low because we often slow the line, or quality is
low from defects (maybe a capper causing leakers).
Informing strategy: Maintenance has a direct impact on the Availability component of OEE
(breakdowns reduce availability). By tracking OEE, maintenance and operations together can
target the largest losses. For instance, if Availability is only 70% due to unexpected
downtime, improving preventive maintenance or quick changeover procedures can boost it.
OEE also helps justify maintenance improvements in business terms – e.g., a 5% OEE gain
might equate to thousands of additional cases produced.
It is a “bridging” KPI between maintenance and production: high reliability (fewer failures)
leads to higher OEE, and conversely chasing very high OEE (by pushing machines too hard)
can sometimes reduce MTBF, so balance is needed. Companies often set OEE targets and use
maintenance strategy (preventive/predictive) to help achieve them by maximizing equipment
availability and throughput.
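The OEE arithmetic can be illustrated with a minimal sketch; the shift length, design speed, and counts below are hypothetical, chosen only to show how the three factors multiply together.

```python
# Hypothetical one-shift numbers for a canning line; all figures are illustrative.
scheduled_time_min = 480          # planned production time for the shift
downtime_min = 65                 # breakdowns + changeovers
ideal_rate_cpm = 1000             # design speed, cans per minute
total_cans = 350_000              # actual output in the shift
reject_cans = 2_500               # leakers, low fills, label defects

run_time_min = scheduled_time_min - downtime_min

availability = run_time_min / scheduled_time_min
performance = total_cans / (run_time_min * ideal_rate_cpm)
quality = (total_cans - reject_cans) / total_cans

oee = availability * performance * quality
print(f"Availability {availability:.1%}, Performance {performance:.1%}, "
      f"Quality {quality:.1%}, OEE {oee:.1%}")
```

Even though each individual factor looks respectable in this example (roughly 86%, 84%, and 99%), the combined OEE is only about 72%, which is exactly why the metric is good at exposing hidden losses.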
Cost-Related KPIs: Managing the cost side is crucial as well. Common cost metrics include
maintenance cost per unit produced (e.g. maintenance dollars spent per 1000 liters bottled)
and maintenance cost as a percentage of asset value or replacement asset value (RAV). The
latter is an industry benchmark: maintenance cost/RAV gives an idea of how cost-effectively
assets are maintained.
For instance, a maintenance cost of 5% of RAV per year might be considered normal in
some beverage plants; significantly higher could mean inefficiencies.
Another KPI is planned vs unplanned maintenance cost – tracking how much is spent on
planned activities versus emergency fixes.
Use: These cost KPIs ensure that reliability gains are achieved cost-effectively. A very low
failure rate achieved by exorbitant maintenance spending might not be sustainable, so
cost KPIs keep that in check.
Informing strategy: If maintenance cost as % of RAV is above industry benchmark, it
prompts a review of strategy – maybe there’s over-maintenance or inefficiencies.
Alternatively, a very low maintenance spend could signal under-maintaining (and possibly
hiding reliability problems). By monitoring cost per output, management can see if
improvements (like predictive maintenance) are actually reducing cost per case of beverage
or if they need adjustment. Cost KPIs also feed into decisions like refurbish vs replace: if an
old palletizer’s maintenance costs skyrocket (maybe exceeding, say, 10% of its asset value
yearly), it might be more economical to invest in a new palletizer.
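As a rough illustration of these cost metrics, the sketch below computes maintenance cost as a percentage of RAV and cost per 1,000 litres packaged. All figures are invented for the example and carry no benchmark significance.

```python
# Illustrative annual figures for one site; values are made up for the sketch.
annual_maintenance_cost = 4_200_000      # AUD: labour + parts + contractors
replacement_asset_value = 120_000_000    # AUD: cost to replace the packaging assets
litres_packaged = 450_000_000            # annual output

cost_pct_of_rav = annual_maintenance_cost / replacement_asset_value * 100
cost_per_1000_l = annual_maintenance_cost / (litres_packaged / 1000)

print(f"Maintenance cost: {cost_pct_of_rav:.1f}% of RAV, "
      f"${cost_per_1000_l:.2f} per 1000 L packaged")
```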
Other KPIs: In maintenance, there are many supporting metrics, such as Planned
Maintenance Percentage (PMP) – the share of maintenance hours that are planned vs
reactive. A high PMP (world-class ~80%+) means most maintenance is proactive. Schedule
Compliance measures if planned work is done on time.
Mean Time Between Stops (including minor stops) and Mean Down Time can be used
specifically on packaging lines to capture all stoppages. Also, Overall Equipment Efficiency
(similar naming to OEE but sometimes calculated differently) and Asset Utilization measure
similar concepts of uptime and output. For performance management, tracking Backlog
(outstanding maintenance work) and maintenance workforce productivity can be useful.
However, MTBF, MTTR, OEE, and cost metrics are among the key pillars for strategic
decisions.
Using KPIs to Drive Decisions
Maintenance KPIs are not just numbers; they directly inform strategy and continuous
improvement. For example, if MTBF is low for a critical filler, it flags that we should
implement more aggressive preventive maintenance or investigate redesigning the
troublesome component – essentially, it pushes a move up the “proactive” scale. If MTTR is
high, that might drive efforts like training operators to handle first-line fixes, improving spare
parts logistics, or redesigning equipment for maintainability (quick-change parts). OEE
trends help justify maintenance upgrades: a rising OEE after a new maintenance initiative
demonstrates its value in increased productivity. Conversely, if OEE is being held back by
low availability, it clearly points to maintenance/downtime as the issue, prompting
investment in reliability programs. Maintenance cost KPIs ensure that any strategy (e.g.
condition-based maintenance) is delivering ROI. For instance, if we invest in vibration
sensors (cost up front) but see breakdown costs and unplanned downtime drop, the cost per
unit will improve, validating the approach.
In summary, these KPIs create a feedback loop. As one reference notes, they are “essential for
assessing equipment health and maintenance efficiency and balancing preventive and reactive
maintenance… critical for informing maintenance strategies by providing insights into
equipment reliability, cost management, and effectiveness of tasks.”
By regularly reviewing MTBF, MTTR, OEE, and costs, Asahi’s maintenance managers can
pinpoint where to adjust their strategy – whether to schedule more PM, adopt a predictive
tool, allocate budget for training, or even retire an asset. They allow maintenance
performance to be quantified and aligned with broader business goals (like maximizing
output at lowest cost without compromising quality or safety).
Data-Driven Maintenance and Statistical Reliability Analysis
Modern maintenance optimization relies heavily on data analysis. However, in practice
maintenance data is often incomplete, inconsistent, or “noisy”, which can complicate
reliability assessments. Here we discuss how to handle such data and best practices for
statistical analysis of maintenance performance:
Challenges with Maintenance Data: Maintenance records may have missing entries (e.g. a
technician fixed a jam but didn’t log it), or inconsistent failure coding (one person’s
“mechanical failure” is another’s “jam”). Sensor data from equipment (vibration,
temperature, etc.) can be noisy – subject to random fluctuations that obscure true trends.
Additionally, not all failures are observed within a given time frame (for example, some
components might not fail during the data collection period, leading to censored data). Bad or
incomplete data can lead to misleading conclusions. As one reliability resource notes,
inaccurate or outdated data can result in incorrect assessments of asset performance and
misguided maintenance decisions.
For instance, if failure dates are recorded incorrectly, a calculated MTBF could be very off,
causing you to schedule maintenance at the wrong interval. Or if only major breakdowns get
recorded but minor stoppages (micro-stops) are ignored, the analysis might overlook a
significant source of downtime.
Handling Incomplete/Noisy Data: The first step is improving data quality at the source.
This means training staff on good data practices (e.g. consistent failure coding, prompt
logging of work orders) and possibly simplifying data entry. A case study from a CMMS
provider described how technicians would log details at end of day from memory, leading to
errors; such delays caused underestimation of true repair times and distorted data. The
solution is often to make data capture easier (mobile apps, quick dropdown menus for failure
cause) and instil a culture of “data matters.” Implementing clear definitions – what constitutes
a “failure” vs a minor stop, how to classify causes – ensures the dataset is consistent. Regular
audits of data can identify gaps: e.g., cross-check production downtime records against
maintenance logs to catch unlogged events.
For noisy condition-monitoring data, statistical filtering techniques are used. Maintenance
analysts often employ moving averages or thresholds to distinguish true signals from noise.
For example, rather than triggering an alarm on one high vibration spike (which could be
random), one might trigger only if a trend of high vibration persists over several readings or
exceeds a set threshold by a significant margin. Smoothing algorithms and outlier detection
(to discard obviously spurious readings) help manage sensor noise. It’s also crucial to
combine data sources – corroborate a sensor’s indication with physical inspection or with
other sensors (temperature rise might confirm a vibration issue). This cross-validation helps
ensure we act on real issues, not sensor anomalies.
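A minimal sketch of the rolling-average-with-persistence idea is shown below, using only NumPy. The window length, alarm threshold, and vibration values are illustrative assumptions, not recommended settings.

```python
import numpy as np

def persistent_alarm(readings, threshold, window=5):
    """Flag an alarm only when the rolling mean of the last `window`
    vibration readings exceeds `threshold`, so a single noisy spike
    does not trigger a callout."""
    readings = np.asarray(readings, dtype=float)
    if len(readings) < window:
        return False
    rolling_mean = np.convolve(readings, np.ones(window) / window, mode="valid")
    return bool(rolling_mean[-1] > threshold)

# Hypothetical vibration velocity readings (mm/s) from a filler drive motor.
noisy_spike = [2.1, 2.0, 7.9, 2.2, 2.1, 2.0, 2.2]       # one outlier only
sustained_rise = [2.1, 2.3, 4.8, 5.1, 5.4, 5.6, 5.9]    # genuine upward trend

print(persistent_alarm(noisy_spike, threshold=4.5))      # False - spike filtered out
print(persistent_alarm(sustained_rise, threshold=4.5))   # True  - persistent drift
```

The same pattern extends naturally to control-chart style rules, such as alarming only when several consecutive readings sit above a limit.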
When data is incomplete in terms of failure history, reliability engineers use techniques to
account for it. One common approach is treating unknown outcomes as censored data in
statistical analysis. For instance, if some machines haven’t failed yet, their runtime is “right
censored” (we know it’s at least X hours without failure). Statistical reliability models (like
Weibull analysis) can include both failed and censored data to better estimate failure
distributions.
Tools like Weibull analysis are valuable – they can analyze life data to determine if failures
follow a certain distribution (exponential, Weibull, etc.), even with limited samples. Weibull
analysis explicitly accommodates censored data: e.g., a packaging line motor that has run 5
years without failure contributes to the analysis by showing that many units survive at least
that long. This prevents underestimation of MTBF due to simply averaging only the ones that
did fail. Using these methods, one can predict probabilities of failure over time with more
accuracy and set maintenance intervals accordingly. For example, a Weibull shape parameter
>1 would indicate wear-out behavior (failure rate increasing with time), suggesting a finite
optimal replacement age for a component.
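The following sketch fits a two-parameter Weibull by maximum likelihood while treating still-running units as right-censored, using only NumPy and SciPy's general-purpose optimizer. The operating times and censoring flags are invented for the example; in practice a dedicated reliability package would normally be used.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical times (operating hours) for 10 identical filler valve assemblies.
times = np.array([1800., 2400., 2600., 3100., 3500., 4000., 4200., 4800., 5000., 5000.])
# 1 = observed failure, 0 = still running at end of observation (right-censored).
failed = np.array([1, 1, 1, 0, 1, 1, 0, 1, 0, 0])

def neg_log_likelihood(params):
    k, lam = params                   # Weibull shape and scale
    z = times / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k   # contribution of failures
    log_sf = -z**k                                           # contribution of survivors
    return -np.sum(failed * log_pdf + (1 - failed) * log_sf)

result = minimize(neg_log_likelihood, x0=[1.5, 4000.0],
                  bounds=[(0.05, None), (1.0, None)], method="L-BFGS-B")
shape, scale = result.x
print(f"Weibull shape = {shape:.2f} (>1 implies wear-out), scale = {scale:.0f} h")
```

A fitted shape parameter above 1 supports a wear-out interpretation, and the fitted scale (characteristic life) can then inform a finite replacement interval for that component.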
Best Practices for Statistical Analysis:
Clean and Prepare Data: Before doing any number crunching, ensure the dataset is cleaned.
Remove or correct obvious errors (like duplicate entries or impossible dates), normalize
terminology, and fill in missing data where possible (perhaps by consulting maintenance logs
or technician notes). A best practice framework recommends regularly reviewing and
updating data sources and using technology (sensors/CMMS) to enhance data accuracy.
Classify Failures and Downtime: Not all downtime is equal. It’s useful to categorize events
(e.g. breakdowns vs planned stops vs minor stoppages) and analyze them separately. This
way, statistical analysis (like calculating MTBF) can focus on true failures. Ensure that what
you count as a “failure” is consistent (define criteria for an event to be included).
Use the Right Statistical Tools:
For reliability of components, use life data analysis (Weibull, exponential, log-normal as
appropriate). For example, if you gather times-to-failure for 50 motors (with some not failed
yet), use a Weibull probability plot or maximum likelihood estimation to fit a model that can
handle censored data. This yields insights like “90% of motors last at least 4 years” or an
expected failure rate curve, which guides replacement strategies.
For analyzing maintenance performance metrics (like comparing MTTR before and after a
process change), use descriptive statistics and hypothesis tests if needed. E.g., if average
MTTR dropped from 3 hours to 2 hours after training, a statistical test can confirm whether the
improvement is significant or due to chance (see the sketch after this list).
For condition-monitoring data, time-series analysis is key. Use control charts or trend charts
to see when a metric is statistically out of control. Statistical Process Control (SPC)
methods help identify a real drift in equipment condition amid noise.
If data is sparse, consider aggregating similar equipment to get meaningful statistics. For
instance, if each filler has few failures, analyze all fillers together (if they’re identical) to
discern patterns.
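As referenced above, a minimal sketch of the before/after MTTR comparison might look like the following. The repair durations are hypothetical, and Welch's t-test is used because the two samples need not share a variance.

```python
import numpy as np
from scipy import stats

# Hypothetical repair durations (hours) for capper breakdowns, before and after
# a spare-parts and training initiative.
mttr_before = np.array([3.2, 2.8, 3.5, 4.1, 2.9, 3.6, 3.3, 3.8])
mttr_after = np.array([2.1, 1.8, 2.4, 2.0, 2.6, 1.9, 2.3, 2.2])

t_stat, p_value = stats.ttest_ind(mttr_before, mttr_after, equal_var=False)
print(f"Mean before: {mttr_before.mean():.2f} h, after: {mttr_after.mean():.2f} h")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) supports the view that the drop in MTTR is a
# real improvement rather than chance variation.
```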
Embrace CMMS and Analytics Tools: A modern CMMS or data analytics platform can
automate a lot of statistical analysis. They can compute KPIs, generate trends, and even apply
machine learning to predict failures. Ensuring that Laverton and Abbotsford record data in a
central system would allow easier analysis across a larger dataset (improving statistical
confidence). The data should be used to generate regular reports – e.g., monthly reliability
report highlighting MTBF/MTTR trends and major Downtime causes (Pareto analysis to
focus on the “vital few” issues, as 20% of causes might create 80% of downtime).
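A simple Pareto breakdown of downtime causes can be produced with a few lines of pandas; the cause categories and hours below are invented for illustration.

```python
import pandas as pd

# Hypothetical monthly downtime log, aggregated by cause (hours are made up).
downtime = pd.DataFrame({
    "cause": ["Labeller jam", "Filler valve leak", "Capper misfeed",
              "Conveyor fault", "Case packer jam", "Palletizer sensor"],
    "hours": [42.0, 18.5, 9.0, 6.5, 4.0, 2.0],
})

pareto = downtime.sort_values("hours", ascending=False).reset_index(drop=True)
pareto["cumulative_pct"] = pareto["hours"].cumsum() / pareto["hours"].sum() * 100
print(pareto)
# The first one or two causes typically account for most of the lost hours -
# these are the "vital few" to attack first.
```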
Account for Noisy/Incomplete Data in Conclusions: Always interpret results with an
understanding of data limitations. For example, if reliability analysis shows an increasing
failure rate, confirm that with the maintenance team’s observations. If some data was
missing (say one packaging line was offline for a rebuild during part of the year – affecting
failure counts), adjust the analysis or at least annotate it. Essentially, accompany statistical
findings with practical context.
Iteratively Refine Data Collection: The analysis will often highlight data gaps. If you find
you can’t calculate a meaningful MTBF for a critical machine because failures aren’t
recorded consistently, that’s a cue to improve the logging process. Or if a predicted failure
didn’t materialize because the sensor data was noisy, improve the sensor setup.
In essence, data-driven maintenance involves continuously improving both the data and the
analysis. A common adage is “garbage in, garbage out” – good decisions require good data.
Companies should invest in training personnel on data entry, possibly dedicate reliability
engineers or data analysts to comb and interpret data and use statistical tools appropriate for
maintenance (Weibull analysis for failure forecasting, regression analysis for identifying
stress factors, etc.). By doing so, even a noisy dataset can be transformed into actionable
insights. As a result, Asahi can better pinpoint problem areas, optimize PM schedules (maybe
extend intervals for some over-maintained assets, and shorten for under-maintained ones),
and quantify the impact of improvements (using stats to verify that “after we implemented X,
failures reduced by Y% with Z confidence”).
Industry Best Practices and Recommendations
To elevate maintenance performance, it’s useful to benchmark against industry best practices
and learn from others’ successes. In the global beverage and food manufacturing sector,
certain practices and metrics are considered “world class”:
Benchmarking Maintenance Practices: One key benchmark is the ratio of planned vs
unplanned maintenance. World-class operations aim for around 70-80% of maintenance work
to be planned (preventive/predictive) and no more than 20-30% reactive. Achieving this
indicates a proactive culture.
Another benchmark is OEE – top beverage bottlers often run packaging lines at 85% OEE
or higher by minimizing downtime and speed losses. In terms of labor, a common metric is
maintenance cost as a percent of replacement asset value (mentioned earlier, often ~2-3% per
annum in efficient plants for core processing, though packaging might be higher due to
intensive use). Mean downtime per event and schedule compliance (planned work executed
as scheduled >90%) are also tracked. Best-in-class plants also maintain a low inventory of
critical spare parts by relying on condition monitoring and supplier partnerships to provide
parts just in time – this requires trust in predictive maintenance.
Preventive/Predictive Maintenance Programs: Globally, many beverage companies have
adopted Total Productive Maintenance (TPM) or similar frameworks. TPM, originating in
Japan, involves operators in basic care (autonomous maintenance) and focuses on eliminating
the “Six Big Losses” (equipment failures, setup losses, small stops, reduced speed, defects,
and startup losses). Embracing TPM can dramatically improve reliability and OEE by
ensuring everyone (from operators to managers) is working to identify and fix issues
proactively. For example, some breweries train packaging line operators to perform daily
checks and basic maintenance (lubrication, tightening, cleaning). This not only prevents
issues but also frees up maintenance technicians for more advanced tasks. Global leaders also
implement Reliability-Centered Maintenance (RCM) – a structured process to determine the
most effective maintenance strategy for each asset based on failure modes and effects. RCM
ensures a balanced approach (not over-maintaining trivial parts or under-maintaining critical
ones).
Case Studies of Optimized Programs: There are numerous case studies from similar
industries that highlight the benefits of optimized maintenance:
A food processing plant (Simmons Foods feed mill) cut downtime by 50-60% and increased
planned maintenance to ~75-80% of all jobs by adopting better planning and predictive
techniques. They leveraged technologies and rigorous planning to move from reactive fixes to
a mostly predictive state, using tools like Failure Mode and Effects Analysis (FMEA) and
root cause analysis to pre-empt issues. This shows the scale of improvement possible –
downtime cut in half is a massive productivity gain.
A major brewery case (MillerCoors) reported that by improving maintenance planning and
scheduling (“planning prowess”), they dramatically increased completed PM work, which in
turn raised equipment availability and uptime and lowered maintenance costs. In other words,
investing in proper planning and execution of maintenance tasks paid off in more production
and less cost – a win-win.
An automotive example, though outside beverage, is instructive: a BMW plant achieved near
100% uptime in some critical areas and drove reactive work down to <5% of maintenance
tasks by a combination of predictive maintenance and long-term planning. While a brewery
might not reach car-manufacturing levels of uptime, it shows that with sustained effort,
unplanned events can be nearly eradicated.
Another case study from an aluminum plant (Alcoa) linked $2.4 million in savings in one
year to OEE improvements from reliability initiatives – quantifying the financial impact of
maintenance excellence.
In the beverage packaging machinery world, companies that have implemented IIoT
(Industrial Internet of Things) sensors and real-time monitoring have seen significant
reductions in downtime. For example, Coca-Cola bottlers have reported trials of predictive
analytics that prevented filler failures, saving hours of downtime. Allied Reliability (a
consulting firm) notes that bottling companies transitioning to new monitoring tech can
mitigate labor shortages and catch failures early, which is crucial given high demand and
tight labor in the industry.
Global Standards and Frameworks: Many best practices align with standards like ISO
55001 Asset Management – which provides a framework for managing assets, including
maintenance, with a lifecycle perspective and continuous improvement. Additionally,
benchmarking networks (like those by the Manufacturing Asset Management Council or
similar bodies) allow sites to compare KPI values against industry peers. Asahi could
benchmark Laverton and Abbotsford’s metrics (MTBF, OEE, maintenance cost%) against
other breweries or beverage plants worldwide to identify gaps. For instance, if global peer
data shows average packaging line MTBF is 20 hours and their line is 15 hours, that’s a gap
to close.
Quantifiable Benefits of Best Practices: Implementing best practices yields tangible
improvements: reduced downtime (often 20-50% reduction as seen in case studies), longer
equipment life (due to less stress from failures), and lower maintenance costs per unit.
Increased planned maintenance ratio means fewer frantic emergency fixes – which typically
cost 3-5 times more than planned ones. Improved reliability also improves safety (fewer
sudden breakdowns means fewer chances for accidents under pressure) and quality (machines
running in optimal condition produce more consistent quality product, with fewer spills or
defective packages). The capacity gain can be huge – if a packaging line’s OEE rises from
say 60% to 75%, that’s like adding an extra 25% capacity without new equipment. In a
competitive beverage market, this can translate to millions of additional bottles/cans per year.
To illustrate, one case study noted a site moving to a predictive maintenance state was able to
plan ~75-80% of jobs and remain mostly proactive. Another noted that reactive work in some
top plants is <10% of workload. These numbers show what is achievable. Financially, many
companies report maintenance best practices give a return on investment (ROI) through
avoided downtime. As a concrete example, after reliability improvements, Simmons Foods
could avoid having a second downtime shift, effectively increasing output without extra
labor.
Furthermore, best practices like root cause analysis (RCA) on failures ensure that problems
don’t recur – eliminating chronic issues can by itself boost uptime significantly (solving one
frequent problem can raise OEE a few points). Plants that rigorously apply RCA and
continuous improvement see a compounding effect: each solved issue frees capacity to solve
the next. Over a few years, this can transform performance.
Recommendations for Asahi: Benchmarking against these standards and cases, Asahi should
target at least ~80% planned maintenance, OEE > 85% on packaging lines, <10% unplanned
downtime, and maintenance cost per unit trending down annually. Adopting proven
approaches like TPM (engaging operators in maintenance) and leveraging predictive tools
(vibration, thermal monitoring on critical machines) will help reach these targets. Emulating
the planning discipline of MillerCoors or the predictive focus of leading companies will
position Laverton and Abbotsford among best-in-class operations.
Implementation Plan for Asahi Beverages
To transition Laverton and Abbotsford towards standardized, optimized maintenance
practices, a clear implementation roadmap is needed. Below is a step-by-step plan:
1. Assess Current State and Set Goals: Begin with a thorough audit of existing maintenance
practices at both sites. Document what maintenance is done (schedules, procedures), current
performance metrics (MTBF, downtime hours, backlog, etc.), and pain points as reported by
staff. This baseline will highlight gaps. Concurrently, establish and prioritize maintenance
goals aligned with business objectives. For example, goals might include “Reduce unplanned
downtime by 30% within 12 months” or “Achieve OEE of 85% on can line by next year” or
“Standardize PM compliance to 95% at both sites.” These goals give direction and a way to
measure success.
2. Create KPIs and Measurement System: As the audit is done, ensure the right KPIs are
defined and can be tracked for both sites (the ones discussed: MTBF, MTTR, OEE, planned
maintenance %, cost metrics, etc.). Implement a unified system for recording downtime and
maintenance across sites – ideally upgrading to a common CMMS if not already in place.
Commit to measuring and reporting these KPIs regularly. This might involve some training
on data entry and reporting (so that Laverton and Abbotsford data is apples-to-apples). If data
collection was a weakness, invest in fixing it early (e.g., add sensors, automate data capture
where possible, or simply enforce logging discipline). Essentially, put the infrastructure in
place so that improvements (or issues) can be quantitatively tracked.
3. Obtain Stakeholder Buy-In: Engage stakeholders at all levels – plant managers,
production supervisors, maintenance teams, and even operators. Communicate the vision of
moving to best-in-class maintenance and how it benefits everyone (higher reliability helps
production meet targets, reduces firefighting stress on maintenance, etc.). Getting
management support is critical; ensure site leaders endorse the changes and allocate
necessary resources. Similarly, involve the frontline: maintenance techs and operators often
know the equipment best, and their buy-in is needed for things like TPM or new procedures.
As one guide advises, getting stakeholder buy-in is a key early step to success. You might
form a cross-site “maintenance excellence team” including representatives from both
Laverton and Abbotsford to foster a shared commitment. This team can meet regularly to
oversee the implementation.
4. Leverage the Right Technology: Standardize on tools that will support optimized
maintenance. This could mean rolling out a unified CMMS platform between the two sites if
not already in place, so scheduling and tracking are consistent. Also, consider technology for
condition monitoring – e.g., install vibration sensors on critical motors, use thermal cameras
for electrical panels, etc., particularly at Abbotsford where older equipment might need
additional monitoring. Evaluate if the packaging lines have built-in diagnostics that can be
integrated. The goal is to have a technology backbone that enables condition-based
maintenance and easy reporting. In short, equip the maintenance team with modern tools
(tablets for work orders, centralized maintenance knowledge base, etc.). As best practice,
using the right software/technology makes execution much easier.
5. Develop Standardized Maintenance Procedures: Create or update maintenance SOPs
(Standard Operating Procedures) to reflect best practices and make them uniform across sites.
For example, if Laverton has a great preventive task list for the Krones filler, ensure
Abbotsford’s similar filler uses that list too. Conversely, if Abbotsford’s team has an excellent
lubrication route, adopt it at Laverton. Standardize maintenance planning processes – how
PM schedules are set, how work orders are prioritized. This might involve adopting a
common maintenance strategy framework like RCM or TPM: perform RCM analysis on
major equipment to decide maintenance tactics, and implement TPM pillars (autonomous
maintenance, planned maintenance, etc.). Document these procedures and train teams on
them. The idea is that a maintenance technician from one site could work at the other and find
a similar approach. Standardization also extends to spare parts management – create a
centralized or at least coordinated spares inventory system to reduce duplication and ensure
critical spares are available (perhaps sharing spares for common equipment between sites).
6. Training and Culture Change: Begin training programs to fill any skill gaps. If moving
towards condition-based maintenance, train staff on vibration analysis basics or how to
interpret sensor data. Train operators on doing basic maintenance checks (a TPM practice) –
e.g., how to clean and inspect their machine daily and recognize early warning signs.
Emphasize safety and quality during these trainings – proper maintenance improves both.
Cultivating a reliability culture is important: encourage reporting of issues, root cause
thinking, and celebrating improvements. For instance, hold monthly meetings where
maintenance and production discuss downtime incidents openly and identify root causes and
solutions (no blame, just learning). As the FoodSafetyTech article noted, maintenance staff
must be well-trained and documentation maintained – this implementation will require
investing in both.
7. Pilot and Rollout: It can help to pilot new practices on one line or area before full rollout.
For example, pick one packaging line at Laverton to fully implement the new maintenance
regime (enhanced PM, added sensors, TPM with operators) and monitor results for a few
months. This pilot can iron out kinks and also serve as a showcase. Similarly, maybe pilot at
Abbotsford on one bottling line. Use lessons learned to refine the processes, then scale up to
all lines at both sites. Staggering the rollout ensures the maintenance team isn’t overwhelmed
and can focus resources. Quick wins should be pursued early – tackle a known chronic
problem with a focused effort (maybe that labeler that always causes downtime) to show
immediate benefits of the new approach.
8. Align Performance Targets and Review Progress: Set common performance targets for
both sites based on the defined KPIs (e.g., “Abbotsford and Laverton will both achieve >500
hours MTBF on packaging lines” or “<5% downtime due to maintenance issues”). Regularly
review these metrics – possibly through a dashboard or monthly report. If targets are missed,
use problem-solving techniques (root cause analysis, 5-why, etc.) to find out why and adjust.
Encourage a bit of friendly competition and collaboration: for instance, if Laverton achieves a
better KPI, have them share how; if Abbotsford solves an issue, let Laverton learn from it.
Periodic cross-site audits can be done, where engineers from one site audit the other’s
maintenance practices against the standards, to ensure adherence and pick up improvement
ideas.
9. Continuous Improvement and Best Practice Sharing: Once the standardized program is
in place, maintain a cycle of continuous improvement. Use the data from KPIs to identify
further improvement opportunities (maybe MTBF is still lower than desired for a certain
machine – do a deeper analysis or an RCM study on it). Also, keep benchmarking externally:
attend industry conferences, engage with other beverage companies or suppliers to learn
about new maintenance technologies (for example, perhaps AI-based predictive analytics or
advanced robotics for maintenance tasks). Adapt and update the maintenance strategy as
technology and company needs evolve. Create channels for the maintenance teams at both
sites to share insights – for example, a shared online forum or quarterly joint workshop. Over
time, this helps in developing in-house “best practices” documentation that codifies what
works best for Asahi’s operations.
Timeline Considerations: The transition should be mapped over, say, 12-24 months. In the
first 3 months, do the assessment and design the new program. By 6 months, have the KPI
tracking in place and some quick wins (e.g., critical PM tasks executed, worst bottleneck
fixed). 6-12 months mark: pilots completed, major training done, and start of full rollout. By
12-18 months, the new practices should be in effect across both sites, and improving trends in
reliability KPIs should be evident. Full cultural change might take longer, but by 24 months
the expectation is both sites consistently hit the maintenance performance targets set at the
start.
Implementing these steps, Asahi can move from the current state to a unified, optimized
maintenance regime. Key to success will be management commitment, transparency of
results, and keeping the focus on how maintenance improvements contribute to broader
business success (like meeting production goals and lowering costs). By following this
roadmap – establishing goals/KPIs, securing buy-in, standardizing processes, leveraging
technology, and iterating improvements – Laverton and Abbotsford can significantly boost
their equipment reliability and operational efficiency. The end result will be packaging lines
that run with fewer disruptions, maintenance teams that work smarter not harder, and a
company culture that values proactive care of its critical assets.