Confidential, Dynatrace LLC
Testing a Moving Target
How Do We Test Machine Learning Systems?
Peter Varhol, Dynatrace LLC
About me
• International speaker and writer
• Degrees in Math, CS, Psychology
• Evangelist at Dynatrace
• Former university professor, tech journalist
What You Will Learn
• What kind of systems produce nondeterministic results
• Why we can’t test these systems using traditional techniques
• How we can assess, measure, and communicate quality with learning and adaptive systems
Agenda
• What are machine learning and adaptive systems?
• How are these systems evaluated?
• Challenges in testing these systems
• What constitutes a bug?
• Summary and conclusions
We Think We Know Testing
• We test deterministic systems
• For a given input, the output is always the same
• And we know what the output is supposed to be
• If the output is something else
• We may have a bug
• We know nothing
Machine Learning and Adaptive Systems
• We are now building a different kind of software
• It never returns the same result
• That doesn’t make it wrong
• How can we assess the quality?
• How do we know if there is a bug?
How Does This Happen?
• The problem domain is ambiguous
• There is no single “right” answer
• “Close enough” is good
• We don’t know quite why the software responds as it does
• We can’t easily trace code paths
What Technologies Are Involved?
• Neural networks
• Genetic algorithms
• Rules engines
• Feedback mechanisms
• Sometimes hardware
Neural Networks
• Set of layered algorithms whose variables can be adjusted via a learning process
• The learning process involves training with known inputs and outputs
• The algorithms adjust coefficients to converge on the correct answer (or not)
• You freeze the algorithms and coefficients, and deploy
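The train-then-freeze cycle described above can be sketched with a toy one-neuron "network." The data, learning rate, and function names here are hypothetical illustrations, not code from the talk:

```python
# Minimal sketch: train a single "neuron" (one weight, one bias) on known
# inputs and outputs, then freeze the coefficients for deployment.

def train(samples, epochs=2000, lr=0.05):
    """Adjust the coefficients to converge on the known answers (or not)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = (w * x + b) - target
            # Nudge each coefficient to reduce the error on this sample
            w -= lr * error * x
            b -= lr * error
    return w, b  # "freeze" these coefficients and deploy

# Known inputs and outputs used for training (underlying rule: y = 2x + 1)
training_data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(training_data)
```

After training, the coefficients land near the true values but are not guaranteed to be exact; quality is judged by how closely the frozen model tracks known answers, not by bit-for-bit equality.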
A Sample Neural Network
Genetic Algorithms
• Use the principle of natural selection
• Create a range of possible solutions
• Try out each of them
• Choose and combine two of the better alternatives
• Rinse and repeat as necessary
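The genetic-algorithm loop can be sketched as follows. The objective function, population size, and mutation scale are toy values chosen for illustration:

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2  # the ideal solution is x == 3

def evolve(generations=60, pop_size=12, seed=1):
    rng = random.Random(seed)
    # Create a range of possible solutions
    population = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Try out each of them, then choose two of the better alternatives
        population.sort(key=fitness, reverse=True)
        parent_a, parent_b = population[0], population[1]
        # Combine the parents (with a little mutation), and repeat
        population = [(parent_a + parent_b) / 2 + rng.gauss(0.0, 0.5)
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
```

Note that the result hovers near the optimum rather than hitting it exactly; "close enough" is the realistic acceptance standard.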
Rules Engines
• Layers of if-then rules, with likelihoods associated
• With complex inputs, the results can be different
• Determining what rules/probabilities should be changed is almost impossible
• How do we measure quality?
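A minimal sketch of if-then rules with associated likelihoods, showing why the same complex input can produce different results on different runs. The rules, probabilities, and scores below are invented for illustration:

```python
import random

RULES = [
    # (condition on the input, likelihood the rule fires, points it adds)
    (lambda order: order["amount"] > 1000, 0.9, 2),
    (lambda order: order["country"] != order["card_country"], 0.7, 3),
    (lambda order: order["first_purchase"], 0.5, 1),
]

def risk_score(order, rng=random):
    score = 0
    for condition, likelihood, points in RULES:
        # Each matching rule fires only with its associated probability
        if condition(order) and rng.random() < likelihood:
            score += points
    return score

order = {"amount": 2500, "country": "US",
         "card_country": "FR", "first_purchase": True}
```

Scoring the same order repeatedly yields different totals, which is exactly why comparing against a single expected output breaks down for these systems.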
How Are These Systems Used?
• Transportation
• Self-driving cars
• Aircraft
• Ecommerce
• Recommendation engines
• Finance
• Stock trading systems
A Practical Example
• Electric wind sensor
• Determines wind speed and direction
• Based on the cooling of filaments
• Several hundred data points of known results
• Designed a three-layer neural network
• Then used the known data to train it
Another Practical Example
• Retail recommendation engines
• Other people bought this
• You may also be interested in that
• They don’t have to be perfect
• But they can bring in additional revenue
Challenges to Validating Requirements
• What does it mean to be correct?
• The result will be different every time
• There is no one single right answer
• How will this really work in production?
• How do I test it at all?
Possible Answers
• Only look at outputs for given inputs
• And set accuracy parameters
• Don’t look at the outputs at all
• Focus on performance/usability/other features
• We can’t test accuracy
• Throw up our hands and go home
Testing Machine Learning Systems
• Have objective acceptance criteria
• Test with new data
• Don’t count on all results being accurate
• Understand the architecture of the network as a part of the testing process
• Communicate the level of confidence you have in the results to management and users
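An objective acceptance criterion against fresh data might look like the sketch below: require "close enough, often enough" rather than perfection. The stand-in model, tolerance, and 80% threshold are hypothetical:

```python
def frozen_model(x):
    # Stand-in for the deployed network; it approximates y = 2x + 1
    return 2.02 * x + 0.9

def acceptance_rate(model, fresh_data, tolerance):
    """Fraction of new inputs whose output falls within tolerance."""
    hits = sum(1 for x, expected in fresh_data
               if abs(model(x) - expected) <= tolerance)
    return hits / len(fresh_data)

# New data the network never saw during training
fresh = [(4, 9), (5, 11), (6, 13), (10, 21), (50, 101)]
rate = acceptance_rate(frozen_model, fresh, tolerance=0.5)
accepted = rate >= 0.8  # criterion agreed with stakeholders up front
```

One of the five fresh points misses the tolerance here, and the system still passes; that is the "don't count on all results being accurate" point made executable, and the rate itself is the confidence figure to communicate.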
What About Adaptive Systems?
• Adaptive systems are very similar to machine learning
• The problems solved are slightly different
• Neural algorithms are used, and trained
• But the algorithms aren’t frozen in production
Machine Learning and Adaptive Systems
• These are two different things
• Machine learning systems get training, but are static after deployment
• Adaptive systems continue to adapt in production
• They dynamically optimize
• They require feedback
Adaptive Systems
• Airline pricing
• Ticket prices change three times a day based on demand
• It can cost less to go farther
• It can cost less later
• Ecommerce systems
• Recommendations try to discern what else you might want
• Can I incentivize you to fill up the plane?
Recommendation Engines Can Be Very Wrong
• Brooks Ghost running shoes
• Versus ghost costumes
• We don’t take context into account
• But do they make money?
• Well, probably
Considerations for Testing Adaptive Systems
• You need test scenarios
• Best case, average case, and worst case
• You will not reach mathematical optimization
• Determine what level of outcomes are acceptable for each scenario
• Defects will be reflected in the inability of the model to achieve goals
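The scenario approach can be sketched like this, assuming a hypothetical pricing model and invented acceptance levels; a real test would drive the actual adaptive system and measure its outcomes:

```python
SCENARIOS = {
    # scenario: (simulated demand level, minimum acceptable revenue)
    "best_case": (1.0, 900.0),
    "average": (0.6, 500.0),
    "worst_case": (0.2, 100.0),
}

def pricing_model_revenue(demand):
    # Stand-in for running the adaptive pricing system at this demand
    # level and measuring the revenue it produces.
    base_price, seats = 100.0, 10
    return base_price * seats * demand * 0.95

def run_scenarios():
    results = {}
    for name, (demand, minimum) in SCENARIOS.items():
        # A defect shows up as the model failing to reach the scenario's goal
        results[name] = pricing_model_revenue(demand) >= minimum
    return results

outcomes = run_scenarios()
```

The pass/fail level differs per scenario, not per individual output, which is what replaces the traditional expected-result comparison.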
What Does Being Correct Mean?
• Are we making money?
• Is the adaptive system more efficient?
• Are recommendations being picked up?
• Is it worthwhile to test recommendations?
• How would you score that?
These Are Very Different Measures
• We have never tested these characteristics before
• Can we learn?
• How do we make quality recommendations?
• Consistency?
• Value?
• Does it matter?
Objections
• I will never encounter this type of application!
• You might be surprised
• I will do what I’ve always done
• Um, no you won’t
• My goals will be defined by others
• Unless they’re not
• You may be the one
How Do We Test These Things?
• Multiple inputs at one time
• Inputs may be ambiguous or approximate
• The output may be different each time
• Testing accuracy is a fool’s game
• Past data
• We know how different pricing strategies turned out
• We made recommendations in the past
What is a Bug?
• A mismatch between inputs and outputs?
• It’s supposed to be that way!
• Not every recommendation will be a good one
• But that doesn’t mean it’s a bug
• Too many wrong answers
• Define too many
We Found a Bug, Now What?
• The bug could be unrelated to the neural network
• Treat it as a normal bug
• If the neural network is involved
• Determine a definition of inaccurate
• Determine the likelihood of an inaccurate answer
• This may involve serious redevelopment
Conclusions
• We have little experience with learning and adaptive systems
• Requirements have to be very different
• We need to understand the difference between correct and accurate
• We need objective requirements
• And the ability to measure them
• And the ability to communicate what they mean
Thank You
Peter Varhol
Dynatrace LLC
peter.varhol@Dynatrace.com

Editor's Notes

  • #6: Software testing, in theory, is a fairly straightforward activity. For every input, there is a defined and known output. We enter values, make selections, or navigate an application, and compare the actual result with the expected one. If they match, we nod and move on. If they don’t, we possibly have a bug. The point is that we already know what the output is supposed to be. Granted, sometimes an output is not well-defined, and there can be some ambiguity, and you get disagreements on whether or not a particular result represents a bug or something else.
  • #7: But there is a type of software where having a defined output is no longer the case. Actually, two types. One is machine learning systems. The second is predictive analytics, or adaptive systems.
  • #10: Most machine learning systems are based on neural networks. A neural network is a set of layered algorithms whose variables can be adjusted via a learning process. The learning process involves using known data inputs to create outputs that are then compared with known results. When the algorithms reflect the known results with the desired degree of accuracy, the algebraic coefficients are frozen and production code is generated. Today, this comprises much of what we understand as artificial intelligence.
  • #14: These types of software are becoming increasingly common, in areas such as ecommerce, public transportation, automotive, finance, and computer networks. They have the potential to make decisions given sufficiently well-defined inputs and goals. In some instances, they are characterized as artificial intelligence, in that they seemingly make decisions that were once the purview of a human user or operator. Examples include decision augmentation and personal assistants.
  • #15: Both of these types of systems have things in common. For one thing, neither produces an “exact” result. In fact, sometimes they can produce an obviously incorrect result. But they are extremely useful in a number of situations when data already exists on the relationship between recorded inputs and intended results. Let me give you an example. Years ago, I devised a neural network that worked as a part of an electronic wind sensor. This worked through the wind cooling the electronic sensor based on its precise decrease in temperature at specific speeds and directions. I built a neural network that had three layers of algebraic equations, each with four to five separate equations in individual nodes, computing in parallel. They would use starting coefficients, then adjust those coefficients based on a comparison between the algorithmic output and the actual answer. I then trained it. I had over 500 data points regarding known wind speed and direction, and the extent to which the sensor cooled. The network I created passed each input into its equations, through the multiple layers, and produced an answer. At first, the answer from the network probably wasn’t that close to the known correct answer. But the algorithm was able to adjust itself based on the actual answer. After multiple iterations with the training data, the coefficients should gradually home in on accurate and consistent results.
  • #17: How do you test this? You do know what the answer is supposed to be, because you built the network using the test data, but it will be rare to get an exactly correct answer all of the time. The product is actually tested during the training process. Training either brings convergence to accurate results, or it diverges. The question is how you evaluate the quality of the network.
  • #19: Have objective acceptance criteria. Know the amount of error you and your users are willing to accept. Test with new data. Once you’ve trained the network and frozen the architecture and coefficients, use fresh inputs and outputs to verify its accuracy. Don’t count on all results being accurate. That’s just the nature of the beast. And you may have to recommend throwing out the entire network architecture and starting over. Understand the architecture of the network as a part of the testing process. Few if any will be able to actually follow a set of inputs through the network of algorithms, but understanding how the network is constructed will help testers determine if another architecture might produce better results. Communicate the level of confidence you have in the results to management and users. Machine learning systems offer you the unique opportunity to describe confidence in statistical terms, so use them. One important thing to note is that the training data itself could well contain inaccuracies. In this case, because of measurement error, the recorded wind speed and direction could be off or ambiguous. In other cases, the cooling of the filament likely has some error in its measurement.
  • #21: Another type of network might be termed predictive analytics, or adaptive systems. These systems continue to adapt after deployment, using a feedback loop to adjust variables and coefficients within the algorithm. They learn while in production use, and by having certain measurable goals, are able to adjust aspects of their algorithms to better reach those goals. One example is a system under development in the UK to implement demand-based pricing for train service. Its goal is to try to encourage riders to use the train during non-peak times, and it dynamically adjusts pricing to make it financially attractive for riders to consider riding when the trains aren’t as crowded. This type of application experiments with different pricing strategies and tries to optimize two different things – a balance of the ridership throughout the day, and acceptable revenue from ridership. A true mathematical optimization isn’t possible, but the goal is to reach a state of spread-out ridership and revenue that at least covers costs.