
De-risking long-term freight cost volatility with e2e, SKU-level visibility

    

E2E orchestration
JP Doggett

In the past few months, the cost of freight has hit sudden, all-time highs, rendering regular budget planning impossible. How can we get better visibility of costs to pre-empt and avoid additional expense?

 AGENDA 

Mitigating long-term freight cost volatility with e2e, SKU-level visibility

  • What permanent changes are needed to address business risk?
  • What drives unexpected costs to your freight spend?
  • Blind spots: what's preventing or limiting visibility and how can these be eliminated?
  • Data sources: which sources are most reliable and practical for better visibility, e.g. real-time satellite data or supplier updates?
  • Examples of data-driven strategies you can adopt to flatten cost volatility, driven by end-to-end, SKU-level visibility of your supply chain

 WHO FOR? 

Industry sectors: current practitioners from all sectors

Org. size (annual T/O): typically £50m+

Roles & remits: Heads of Supply Chain, Freight, Logistics and Transport with a role in designing and implementing analytics capabilities

 ABOUT INTENT DISCUSSIONS 

  • All discussions are private, held under the Chatham House Rule and moderated by INTENT with approx. 6-8 participants for 45-90 mins of candid, interactive discussion (not a passive webinar)
  • Some discussions include subject matter experts from member-recommended INTENT Partners, others are exchanges of best practice, experiences and ideas among practitioner members only
  • Discussions are shaped by participants according to their interests and questions
  • We may adjust participation to avoid competitive sensitivities and ensure productive discussion

 WHEN? 

Thursday 10th June (14.00 BST / 15.00 CEST) for max. 90 minutes

Hosted by Intent

Expert guest: Ian Powell, Zencargo

 

Request to join

 

Interested but can't make the date? Email us and we'll update you about future discussions.

 

Ribble Case Study - Zencargo.pdf

  • Also...

    • By JP Doggett
      Output from a member discussion hosted on 22nd April by Aleem Bandali of o9 Solutions:
      What does next-gen omnichannel retailing look like?
      There are two main parts:
      The customer-facing aspect, where the customer must feel they are engaging with a single, seamless entity. A lot of work has gone into this aspect already.
      The back-end infrastructure, which, if anything, is more important because it has to keep the brand's promise, and this is where a lot of work still needs to be done. The key is not to think about supply chain in isolation: planning, commercial and finance all need to be connected in real time so that, at any point, it is possible to quickly answer questions like 'can I support this increase in demand?'. This means forecasts and allocations need to be determined channel by channel, not just on an overarching basis.
      What is the technology that underpins this seamless information flow?
      It can be described as a 'digital brain' that senses all parts of the organisation and, like a brain, uses machine learning to continuously improve its understanding of the environment in which it is operating. It is also referred to as a digital twin, based on graph and cube modelling of every node and edge of your supply chain network to create a precise digital copy. Using business rules based predominantly on large volumes of transactional data, machine learning detects patterns and can alert when deviations occur from what is planned or implied by the business rules.
      This also underpins a control tower where dashboards of end-to-end operations identify where there are risks of disruption, often using real-time IoT data. It is possible to drill down to very granular levels to understand precisely which shipments are at risk and which SKUs, DCs and stores will be impacted. It is then possible to model potential solutions to the disruption with an understanding of the operational and financial implications (a toy illustration of this graph view follows below).
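      As a rough illustration only (not o9's actual digital twin), the sketch below models a handful of hypothetical supply chain nodes as a directed graph and flags everything downstream of a disrupted node. All node names and the 'congested' status feed are made up for the example.

import networkx as nx

# Toy directed graph of a supply chain network: supplier -> ports -> DC -> stores.
g = nx.DiGraph()
g.add_edges_from([
    ("supplier_cn", "port_shanghai"),
    ("port_shanghai", "port_felixstowe"),
    ("port_felixstowe", "dc_midlands"),
    ("dc_midlands", "store_london"),
    ("dc_midlands", "store_leeds"),
])

# Mark a node as disrupted, e.g. from a carrier or IoT status feed (hypothetical).
g.nodes["port_felixstowe"]["status"] = "congested"

# Everything reachable downstream of a disrupted node is potentially impacted.
for node, data in g.nodes(data=True):
    if data.get("status") == "congested":
        impacted = sorted(nx.descendants(g, node))
        print(f"Disruption at {node} may impact: {impacted}")

      A real digital twin would also attach SKUs, quantities and lead times to each edge so the same traversal can quantify the impact, but the drill-down principle is the same.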
      How machine learning can improve forecasting accuracy across channels with leading demand indicators
      Forecast accuracy: allowing for different tolerances and categories, the typical scenario has been for 10-20% of SKUs to fall outside the forecast accuracy tolerance limit. This rate often doubled during the pandemic, generally not including availability-driven deviations.
      Machine learning impact: across a wider sample of retail businesses that have implemented machine learning, the typical change has been from 40-60% accuracy using traditional methods to 70-90% accuracy with machine learning used to better understand drivers and leading demand indicators.
      Determining drivers: a common approach is to take up to 200 potential drivers from 1,000+ databases and overlay these with historical data so that the machine learning algorithms detect which drivers are most significant (a hedged sketch of this driver-ranking step appears at the end of this summary). To predict in-store demand as lockdowns ease, for example, high-frequency data like Google Maps data on traffic to retail outlets by date, time and postcode can be overlaid with other external data, such as the number of Covid cases by postcode and the level of restrictions, to build predictive models at granular and aggregate levels. This showed key differences between countries and regions, where people responded slightly differently to their local conditions, but enough to have a material impact on demand. For longer lead-time scenarios of 10+ weeks, a different combination of drivers and data points may give better predictive power.
      Eliminating blind spots: critically, this approach reduces the reliance on tacit knowledge and guesswork. Around 70% of drivers were already identified and being used, but around 30% were different or given a different weighting in the predictive models, which contributed to the improvement in forecast accuracy.
      Inside the 'black box': for businesses that have their own data science teams, the open-source platform means that proprietary algorithms can be combined with the out-of-the-box algorithms to further improve accuracy. It is also important to ensure the ML platform highlights the contribution of each driver, turning the black box of AI transparent to increase trust in the output and, in turn, adoption.
      End-to-end omnichannel planning with a single, integrated view of merchandising and supply chain across all channels
      Channel shift: it is hard to know for sure how much of the pandemic-driven shift will stick, but it does appear to constitute a paradigm shift and so prompts quite fundamental questions about the purpose of a store, where to locate, and how to optimise inventory across the network for robust available-to-promise, with implications for cost-to-serve and so on.
      Integrated views & closing gaps: an accurate forecast is only half the battle; you also have to be able to execute. This is where a digital twin (a digital representation of each and every node in a supply chain, including external suppliers) comes into play, because it is then possible to model scenarios and impacts across all channels and understand what needs to happen in execution to close the gap to plan and forecast.
      Stores as micro fulfilment centres: a digital twin approach also helps to use the distribution network more flexibly, so that bricks-and-mortar stores can become fulfilment centres as conditions demand.
      What went wrong?: there can be a gap in understanding lost sales if, for example, they are compensated for from other parts of the forecast and business, but a digital twin allows for 'intelligent post-game analytics' to understand what went wrong, where, and why.
      Best practices for scaling operations automation over time
      Enterprise buy-in: efforts to improve forecast accuracy and close the gap between plan and execution will be hampered if functions like finance, commercial and supply chain each have their own take on what is and should be happening. A platform approach, with a digital twin at the centre that can evaluate multiple drivers from data streams, allows teams to express and test their views of the world to aid and improve mutual understanding and decision making.
      Crawl, walk, run: best practice is to start small, for example by pulling in a few different drivers or testing the explanatory power of existing drivers on a group of categories. You can then move on to testing more drivers for more sophisticated models, and then to fully automating the processes, so that less time is spent on mundane tasks and more on value-add initiatives.
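      As flagged under 'Determining drivers' above, here is a minimal sketch of ranking candidate demand drivers by their contribution to a predictive model. This is an illustration of the idea only, not o9's implementation; the data file and driver column names are hypothetical placeholders.

# Minimal sketch: rank candidate demand drivers by predictive contribution.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("demand_history.csv")            # hypothetical weekly history
drivers = ["footfall_index", "covid_cases",       # hypothetical driver columns
           "restriction_level", "promo_flag", "avg_temp"]
X, y = df[drivers], df["units_sold"]

# Hold out the most recent period; shuffle=False keeps time order for a demand series.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, shuffle=False)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance measures how much accuracy degrades when each driver is
# shuffled -- a simple, transparent way to expose each driver's contribution.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
ranking = pd.Series(result.importances_mean, index=drivers).sort_values(ascending=False)
print(ranking)

      In practice the candidate set would run to hundreds of drivers and the evaluation would use proper backtesting across categories, but the principle of letting the model surface and weight drivers, then exposing those weights to planners, is the same.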
    • By JP Doggett
      Although there continue to be many challenges in arriving at a robust plan in the first place, this discussion focused on how to close the gap with execution: in other words, how to reduce the time lag, or latency, between deviations from plan being predicted and detected and their being successfully managed.
      Latency definition: how quickly can you predict something is going to happen? How quickly can it be filtered up through the hierarchy? It is fundamental to how the supply chain and finance are organised. Latency is a combination of data availability and process.

       
      Latency -1 (predictive)
      Scenario planning is a key way to decrease latency and even predict demand. Demand sensing is another key part of the solution: in retail this can be social sensing, which gives you a picture of what is happening amongst consumers that is about to affect demand. In a manufacturing context, this could be about introducing sensors to monitor real-time consumption, or about getting information on supply chain disruption, e.g. issues at the ports.
      Latency +1 (responsive)
      The pandemic has pushed businesses towards short-term decision making, but it is the mid-term (S&OP) monthly horizon that will influence profitability. Translating from demand forecast to ordering requires a reduction or elimination of silos. Latency is most often caused by layers of hierarchy, and siloed decision making is a very common challenge. Large businesses are often regionally siloed too, hence the recognition of the need for a centralised decision-making unit.
      Be aware that the flip side of empowerment can be regionally made decisions that affect other parts of the global business.
      Latency challenges/causes
      Slower decision making is exacerbated when finance does not trust the IBP number as much as its own financial forecast. There is a growing convergence of finance into supply chain; often the CFO is the ‘co-pilot’ to the CEO. 
      Suggested approaches to IBP and reducing latency
      A 'whole organisation' approach: IBP cannot just be a supply chain project; it must be integral to, and understood by, all functions. The more senior the sponsorship, the better the chance of success. It is important to maintain your customer promise, and supply chain is part of that. Having a centre of excellence can be a good way to tie it all together; the COE should be made up of people outside of operational roles in order to be truly effective. Adopting a design thinking approach to customer lifecycle management is also useful, as it can help other functions better understand the value of supply chain in the context of the customer promise. Cloud technology reduces latency by having fewer technology anachronisms: applications are up to date and automatically maintained, which reduces lag. IBP requires a business case; to build one, it can be valuable to look at what will happen if nothing is done, i.e. the cost of inaction. Do not over-focus on a single-instance platform across a large organisation: there can be multiple clouds, and technology can interface.
      Organisational structure & people: emerging best practices
      Of course, technology is not the only lever available for closing the planning-execution gap: organisation structure and process design can either hinder or help the flow of critical information and the capability for prompt, informed decisions to be taken.
      High-performing organisations often demonstrate a non-siloed structure whereby, instead of a traditional SCOR-based model, the focus is on end-to-end processes like IBP, O2C and, increasingly, omnichannel, with process owners who are responsible for holistic optimisation. Critical areas for best-in-class cross-functional alignment include product/service innovation, fulfilment & aftercare, and planning. Increasingly these teams have business partners who, for example, have a deep, systematic understanding of both supply chain operations and financial control, to bridge those potential silos. These are often supported by Centres of Excellence, particularly for analytics, which major on optimising segmentation, cost-to-serve and customer behaviour. For planners in particular, it pays dividends not to confuse planning and execution, and to recognise that scheduling is not the same as planning: the latter requires particular skill sets, especially around cross-functional communication. Planners are likely to be more effective if they think and talk like business owners in terms of customer experience and profitability, rather than focusing narrowly on, for example, improving OTIF scores by a couple of points.
      210414 IBPX Oracle Intent roundtable v1 (1).pdf
    • By JP Doggett
      Summary of a virtual boardroom hosted by Bryan Harris of PredictHQ on 'How in-person events can signal demand recovery'.
      Context & current approach to demand sensing

      A three-stage maturity model was used to assess what stage participants were at with regard to demand sensing, with all being somewhere to the left of the centre of this spectrum. Typically, this included:
      a blend of conventional forecasting (using ePOS and historical data) and demand sensing methods where more external data sources are being included;
      sometimes a 'tsunami' of data, but challenges in distinguishing between signal and noise;
      challenges in recalibrating the connection between leading demand indicators and consumer behaviour, as this has been altered by the pandemic and is still a moving target. For example, is the trend for home-based consumption displacing other channels long term, or will it eventually revert to pre-pandemic levels?;
      similarly, the impact of promotions is harder to forecast and decompose into what is actually driving any uptick: the promotion or other coincidental factors?;
      a unanimous desire to further explore and improve demand sensing capabilities.
      How in-attendance events can be harnessed to improve forecast accuracy
      Most demand sensing data sources will incorporate publicly available data like public holidays and the weather, but in-attendance / ticketed events take this to the next level, offering more granular and location-specific insights on likely demand patterns globally, regionally, nationally and even locally. Typically, an initial pilot project evaluates the correlation between event types and demand, either with an in-house team or in conjunction with an outsourced analytics provider, as these correlations will be unique to each organisation, product family, SKU and market (a simple sketch of this step follows at the end of this summary). This has two main outputs: greater clarity in understanding the drivers of demand that are already captured in seasonality adjustments, and improved forecast accuracy, usually between 0.2% and 2% as determined by the organisation's existing accuracy metrics. As more data is gathered and more forecasts are run, the specific event types and properties which have the most impact on demand become clearer. This is also being extended to non-ticketed events, such as TV sports broadcasts, which will have different implications for demand depending on multiple location-based factors. For unscheduled and inherently unpredictable events, it is possible to better grasp the impacts as and when they occur.
        Bryan will be hosting a series of roundtables on demand sensing at our London Member Meeting on 14 September.
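      As flagged above, here is a minimal sketch of the pilot step of correlating scheduled-event attendance with demand. It is an illustration under assumptions, not PredictHQ's method; the data files, column names and event categories are hypothetical placeholders.

# Minimal sketch: correlate scheduled-event attendance with daily demand by site.
import pandas as pd

demand = pd.read_csv("daily_demand.csv", parse_dates=["date"])   # site_id, date, units_sold
events = pd.read_csv("local_events.csv", parse_dates=["date"])   # site_id, date, category, attendance

# Aggregate expected attendance per site-day and event category, then join onto demand.
attendance = (events.pivot_table(index=["site_id", "date"], columns="category",
                                 values="attendance", aggfunc="sum", fill_value=0)
                     .reset_index())
merged = demand.merge(attendance, on=["site_id", "date"], how="left").fillna(0)

# Correlation of each event category's attendance with demand, reported per site.
event_cols = [c for c in merged.columns if c not in ("site_id", "date", "units_sold")]
for site, grp in merged.groupby("site_id"):
    corr = grp[event_cols].corrwith(grp["units_sold"]).sort_values(ascending=False)
    print(site, corr.head(3).to_dict())

      In a real pilot the event categories that correlate strongly would then be added as drivers to the forecasting model and their effect measured against the organisation's existing accuracy metrics.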
    • By JP Doggett
      Summary of a discussion hosted on 20th May with Intent members and guest experts Richard Thompson and Dan Levy of CadDo.
      Market context and typical challenges
      A typical cost-to-serve (CTS) analysis measures logistics, but not the full e2e picture: the 'field to fork' measure. This can become very complex very quickly, but more companies are now moving towards measuring it. Why? These calculations can then drive commercial teams to price deals, better inform the budgeting process, inform how customers should be serviced most profitably, and can also feed into ESG reporting.
      For many, Covid has brought to the fore the need to consider cost-to-serve versus the operational risk of depending on import/export, and where products are manufactured or sourced is being reviewed. Use of CTS tends to be unevenly distributed across parts of the business, typically where costs are high or increasing, for example in shipping and transport costs at the moment. The Amazon effect, where customers expect next-day delivery, is also driving up transport costs, so 'traditional CTS' tends to revolve around the logistics function. When data is available, it is not necessarily complete or transparent, and the results are sub-optimal, where a conclusion is not always obvious. CTS is less commonly used to drive routine discussions and decision-making between functions. CTS does not always drive pricing to the customer, yet it can and perhaps should. Looking at CTS further up the supply chain quickly becomes complex: without data in a usable form readily at hand, the situation has often already changed by the time an analysis can be done. One-off CTS analyses typically only provide a snapshot which, while interesting and possibly informative, rarely becomes actionable as an ongoing tool, which limits their practical value. As greater emphasis is placed on supply chain resilience, there is a greater need to be able to model the implications of possible configurations of footprints, supplier terms etc. and understand their impact on the bottom line.
      Advice on conducting a CTS analysis
      'Cost-to-serve' is something of a misnomer, as it can overly narrow the focus to the cost side; framing it as something like 'cost-to-serve contribution' or Holistic Customer Investment, or taking an EBIT-level view, offers better perspective, coupled with an end-to-end view. Incorporate data from other parts of the business in whatever format it is recorded (don't try to introduce or change data formats, as this will surely be impeded). Start with the invoice data as a base and go to each function in your organisation to see what data can be pulled to augment it. Set the CTS rules and get buy-in to the allocation methodology; you will find that the warehouse is the most complex area. Ensure CTS is not being used to 'beat up' one function: transparency of CTS data helps defuse a sometimes adversarial dynamic where one function's or team's view is pitted against another, acting as a glue rather than a wedge. This transparency itself often prompts more and better questions, which are enriched by different viewpoints. Factor in FTE overhead, e.g. how many planners, how much devotion in customer service? Those FTEs are most often allocated to customers, geographies or products, so these costs can be attributed (a minimal sketch of this allocation step follows at the end of this summary). Establish the wins for each function and you will get the buy-in: how does better CTS insight help each function perform its role better and contribute to the common goal, e.g. improving EBIT? Consider extending the potential 'wins' to include suppliers, who may need to be motivated to support upstream enhancements (open-book discussions); this could be particularly significant for the ESG expectations of suppliers. CTS can start in supply chain but often ends up in commercial, where it is used as a framework for productive negotiation and to drive deals and offers. Sharing this information with customers can help influence how they accept being served, or nudge buying practices in a positive way. Make CTS a living, breathing exercise: take data feeds once a month. A one-off exercise tends to land and be ignored because there is no change or measurement for improvement. CTS analysis can then be used as a data feed for other changes, projects or servicing decisions.
      Incorporating ESG metrics into CTS analysis
       The process is essentially no different to a standard CTS analysis: start with the data and a transparent rules engine to produce regular routine reports. In time, the rules and reports can be adjusted and fine-tuned and become embedded in the decision-making process.
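      As flagged in the advice above, here is a minimal sketch of a rules-based cost-to-serve allocation at customer level: invoice lines are the base, and warehouse and planner FTE cost pools are allocated by simple drivers. All figures, allocation rules and column names are hypothetical, for illustration only.

# Minimal sketch: rules-based cost-to-serve allocation at customer level.
import pandas as pd

invoices = pd.DataFrame({
    "customer": ["A", "A", "B", "C"],
    "revenue":  [12000, 8000, 15000, 5000],
    "freight":  [900, 600, 1400, 450],
    "order_lines": [40, 25, 80, 30],
    "orders":      [4, 3, 6, 2],
})

WAREHOUSE_COST = 6000      # period cost pools to allocate (hypothetical)
PLANNER_FTE_COST = 4000

cts = invoices.groupby("customer").sum()

# Allocation rules: warehouse cost by share of order lines, planner FTE cost by share of orders.
cts["warehouse"] = WAREHOUSE_COST * cts["order_lines"] / cts["order_lines"].sum()
cts["planning"]  = PLANNER_FTE_COST * cts["orders"] / cts["orders"].sum()

cts["cost_to_serve"] = cts[["freight", "warehouse", "planning"]].sum(axis=1)
cts["cts_contribution"] = cts["revenue"] - cts["cost_to_serve"]
print(cts[["revenue", "cost_to_serve", "cts_contribution"]].round(0))

      The same pattern extends to ESG metrics: add a carbon or packaging pool and an agreed allocation driver, and report it alongside the financial contribution in the monthly refresh.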