Welcome to Poly ML!
As experienced specialists we are often engaged to find solutions where others have not been able to do so – situations where the source data is rich but complex, where a successful solution will deliver measurable and significant financial benefit to the organization, and where there are high strategic, legal, financial or reputation risks if that solution is not robust, reliable and trustworthy.
Quality AI is not plug-and-play. We do not “rent” and deploy off-the-shelf software from other companies.
Instead, our approach is deeply rooted in advanced mathematics and data science. It has been honed over 100 years of combined core-team experience, resulting in an entirely proprietary technology stack that spans highly optimized implementations of classic methodologies through to powerful and innovative tools and algorithms of our own design.
Contact us to discuss how we can help you!
Dr. Gaston Gonnet, Founder and Chief Scientist
Prof. Emeritus of Computer Science, ETH Zurich
A few things we’re great at
HYPERFLAT DATA SETS
The combination of ever-growing data and increasing demands for population segmentation leads to data sets with many more columns than rows, creating a high risk of false correlations – we have developed specific solutions to address this 21st-century challenge.
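The failure mode behind "many more columns than rows" is easy to demonstrate with synthetic data: when you screen enough pure-noise features, some will appear strongly correlated with any target by chance alone. A minimal NumPy sketch (illustrative only, not one of PolyML's methods):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols = 20, 1000                    # "hyperflat": far more columns than rows

X = rng.standard_normal((n_rows, n_cols))    # pure-noise features
y = rng.standard_normal(n_rows)              # pure-noise target

# Pearson correlation of every column with y, vectorized
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

print(f"strongest spurious correlation: {np.abs(r).max():.2f}")
```

Even though every value here is random, the best of 1,000 noise columns typically "correlates" with the target at |r| well above 0.5 – which is exactly why naive screening on hyperflat data finds patterns that do not exist.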
Our high quality Feature Importance and ML Confidence algorithms allow us to detect and prevent undesired decisions including discrimination against marginalized individuals and groups. We assess bias in data, in model design and training, and in post-deployment drift.
MULTILINGUAL AND VERNACULAR TEXT
Our work in multilingual and vernacular text, and the proprietary solutions we have developed, allow us to work with unusual, complex and mixed languages.
Make better predictions with less data. Remove the noise and build an optimal minimal dataset, which can improve predictor performance while reducing the costs of collecting data.
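A standard, generic way to build such a minimal dataset is greedy forward selection: repeatedly add the single feature that most improves held-out performance, and stop when nothing helps any more. The sketch below uses plain least squares and hypothetical toy data; it illustrates the idea, not the proprietary approach.

```python
import numpy as np

def forward_select(X_tr, y_tr, X_va, y_va, max_feats=5, tol=1e-3):
    """Greedy forward selection: add features only while validation MSE improves."""
    chosen = []
    best_mse = np.mean((y_va - y_tr.mean()) ** 2)   # baseline: predict the mean
    while len(chosen) < max_feats:
        trial = None
        for j in range(X_tr.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            # Least-squares fit on the candidate subset (with intercept)
            A = np.column_stack([X_tr[:, cols], np.ones(len(y_tr))])
            w, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
            B = np.column_stack([X_va[:, cols], np.ones(len(y_va))])
            mse = np.mean((y_va - B @ w) ** 2)
            if mse < best_mse - tol:                # keep the best improver so far
                best_mse, trial = mse, j
        if trial is None:                           # no feature helps: stop
            break
        chosen.append(trial)
    return chosen

# Toy data: only features 0 and 3 actually drive y
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = 2 * X[:, 0] - 3 * X[:, 3] + 0.1 * rng.standard_normal(200)
print(forward_select(X[:150], y[:150], X[150:], y[150:]))
```

On this toy problem the procedure recovers the two informative columns and discards the eight noise columns, yielding a smaller dataset with no loss of predictive power.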
CONTINUOUS DATA FROM MACHINES AND PROCESSES
Machine- and process-created data is also a PolyML speciality – from real-time manufacturing machine data to longitudinal industrial process data, we have designed and delivered solutions that have generated substantial financial savings.
Mechanical failure in large manufacturing equipment can be enormously expensive, costing hundreds of thousands or even millions of dollars in plant downtime, repairs and mitigation. Knowing in advance that an expensive, critical piece of equipment is at risk of catastrophic failure is therefore tremendously valuable.
In partnership with a manufacturing data collection and visualization company, PolyML has developed a predictive maintenance solution to detect and mitigate upcoming mechanical downtime on large stamping presses. We have trained predictors specific to each individual stamping press to learn its fingerprint, and can predict with high levels of accuracy when a machine is about to suffer a mechanical fault, letting operators stop the machine gracefully and conduct the necessary preventative maintenance, thus avoiding massive failure.
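The per-machine "fingerprint" idea can be approximated in a few lines: learn each press's normal operating statistics from healthy-run sensor data, then flag readings that drift too many standard deviations from that baseline. This z-score sketch is a deliberate simplification (the production system is proprietary and far more sophisticated), with invented sensor channels for illustration:

```python
import numpy as np

class PressFingerprint:
    """Per-machine baseline of normal behaviour; flags anomalous readings."""

    def __init__(self, threshold=4.0):
        self.threshold = threshold

    def fit(self, healthy_readings):
        # Baseline "fingerprint": mean and spread of each sensor channel
        self.mu = healthy_readings.mean(axis=0)
        self.sigma = healthy_readings.std(axis=0) + 1e-9
        return self

    def is_fault_imminent(self, reading):
        # Alarm if any channel drifts beyond `threshold` standard deviations
        z = np.abs((reading - self.mu) / self.sigma)
        return bool(z.max() > self.threshold)

# Hypothetical healthy-run data: [bearing temperature, vibration amplitude]
rng = np.random.default_rng(7)
healthy = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(500, 2))
model = PressFingerprint().fit(healthy)

print(model.is_fault_imminent(np.array([51.0, 1.21])))   # normal cycle
print(model.is_fault_imminent(np.array([63.0, 1.21])))   # temperature spike
```

Because each press gets its own fitted baseline, the same alarm threshold adapts automatically to machines with different normal operating ranges.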
Unique algorithms designed for manufacturing data unleash AI on the plant floor
Avoid machinery damage by shutting down early with Imminent Fault Detection
Intelligently adapt maintenance schedules in real time to save money
Optimize robotic welding for speed and strength while adapting to new weld conditions automatically
COVID-19 Global Pandemic Study
In mid-2020 as the COVID-19 pandemic began, Poly ML was commissioned to undertake a specific and unique research project: an analysis of the COVID-19 pandemic from the “outside in” to try to determine which, if any, macro-characteristics of nations might correlate with their populations’ differing levels of susceptibility to the emerging pandemic. The unique nature of this research led to our invention of a new technique to measure Feature Importance in a complex data set. We hope that our work can be a modest, positive contribution to the corpus of research insights of many thousands of other scientists around the world.
The macro-level data from over 160 countries was amassed and normalized, providing us with over 3,200 unique features per country – as a result, processing requirements exceeded 2.1 million CPU hours and 10¹⁹ cycles.
The correlation of country features to death rates was examined across 20 normalized dates, and a range of popularly hypothesized accelerating conditions were evaluated against the data.
Globalization, calcium intake, economic factors, environmental factors and some aspects of social quality emerged from the plethora of macro-level data as the most predictive of early COVID-19 death rates.
The ratio of features to countries and high correlations between features required a new approach to determining feature importance. This new algorithm selects the most significant features while ensuring that their contributions to the prediction are distinct and significant.
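A well-known generic family of techniques with this flavour is "maximum relevance, minimum redundancy" scoring: rank each candidate feature by its correlation with the target, minus a penalty for correlation with features already selected. The sketch below illustrates that general idea on made-up data; it is not PolyML's proprietary algorithm:

```python
import numpy as np

def mrmr_select(X, y, k=3, penalty=1.0):
    """Greedy mRMR-style selection: relevant to y, non-redundant with prior picks."""
    n, p = X.shape
    Xs = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    ys = (y - y.mean()) / (y.std() + 1e-12)
    relevance = np.abs(Xs.T @ ys) / n              # |corr(feature, target)|
    chosen = [int(np.argmax(relevance))]
    while len(chosen) < k:
        # Mean |corr| of each feature with the features already chosen
        redundancy = np.abs(Xs.T @ Xs[:, chosen]).mean(axis=1) / n
        score = relevance - penalty * redundancy
        score[chosen] = -np.inf                    # never re-pick a feature
        chosen.append(int(np.argmax(score)))
    return chosen

# Features 0 and 1 are near-duplicates; feature 2 carries independent signal
rng = np.random.default_rng(3)
base = rng.standard_normal(300)
X = np.column_stack([base,
                     base + 0.01 * rng.standard_normal(300),
                     rng.standard_normal(300),
                     rng.standard_normal(300)])
y = 2 * base + X[:, 2]
print(mrmr_select(X, y, k=2))
```

Note that the redundancy penalty prevents the near-duplicate pair from both being selected, so each chosen feature makes a distinct contribution to the prediction.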
PolyML’s Feature Importance Toolkit can be used to prove that AI solutions are accurate, reliable and trustworthy. From regulatory compliance and best practices to protecting marginalized populations, the ability to inspect AI solutions with statistical rigor and to explain them has never been more important.
AI regulations and case precedents create non-compliance liabilities for organizations and their Boards of Directors. Are you sure your AI deployments comply?
Can you fully explain the role that your sensitive data features play in machine predictions and recommendations?
Deep Learning solutions are often very useful but aren’t explainable. Our independent assessment and black box testing services open the door to risk testing and ongoing monitoring.
We can work as a partner through the design cycle to assist with designing in and confirming ongoing compliance with Ethical AI principles and Standards.
Solving High-Value Problems
The financial sector is rife with high-value targets and difficult problems where success is measured in small increments. Tiny advantages over competitors can have huge upsides. We combine our data expertise with advanced computer science algorithms to augment ML problem solving in novel ways.
Most organizations are currently investing in teams and infrastructure to deploy Machine Learning. PolyML can assist these efforts in several ways. We work alongside your ML team to complement their existing infrastructure and workflows. PolyML can also act as a neutral third-party validation for regulatory compliance. Our ML platform can ingest your training data and deployed models to independently confirm that your datasets are optimized, your methods are performant, and your deployed models are free of bias.
MASTER DATA MANAGEMENT
Advanced name, alias and address matching maps individuals and corporate entities across systems
Highly accurate models, meticulously unbiased, powering correct business decisions
Robust feature evaluation can be used to ensure compliance with any regulation while also providing internal clarity to business users
We provide a thorough, third-party evaluation service of deployed ML solutions. A standardized report is produced, detailing performance characteristics, biases, sensitivity analysis, and more
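Name-and-alias matching of the kind described above typically combines normalization (case, punctuation, nickname expansion) with fuzzy string similarity. Here is a toy sketch using only the Python standard library's difflib, with a small invented nickname table; a real master-data pipeline would add phonetic encoding, address parsing and blocking:

```python
from difflib import SequenceMatcher

# Tiny illustrative nickname table (a real system would use a large curated one)
NICKNAMES = {"bill": "william", "bob": "robert", "jon": "jonathan"}

def normalize(name):
    """Lowercase, strip punctuation, expand known nicknames."""
    tokens = name.lower().replace(".", "").replace(",", "").split()
    return " ".join(NICKNAMES.get(t, t) for t in tokens)

def match_score(a, b):
    """Similarity in [0, 1] between two normalized names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

print(round(match_score("Bill Smith", "William Smith"), 2))
print(round(match_score("Robert T. Jones", "Bob Jones"), 2))
```

Scores above a tuned threshold would then link the two records to the same individual across systems.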
Meet our Partners
Contact us! It will only take a minute
Joseph brings over 30 years of experience in the financial services industry, spanning investment advisory and corporate finance. He has also worked in private debt and equity, and founded a venture capital firm that assisted companies through capital raises. In addition, Joseph has worked closely with a number of technology companies through their IPO processes. Joseph studied business at Wilfrid Laurier University and takes pride in his involvement in many associations and charitable groups in the Waterloo region.
Gaston received his doctorate in computer science from the University of Waterloo in 1977. He is skilled in Symbolic and Algebraic computation, in particular solving equations (symbolically and numerically), system development, limit and series computation, heuristic algorithms, text searching and sorting algorithms.
He was co-founder in 1980 of the Symbolic Computation group at the University of Waterloo, the research group that produced Maple. In 1984, Gaston co-founded the New Oxford English Dictionary project at UW, which sought to create a searchable electronic version of the Oxford English Dictionary.
Gaston is a computer science professor emeritus at ETH Zurich in Zurich, Switzerland. In 1991, he began developing the Darwin programming language for the biosciences, which would become the basis for OMA, a package for gene orthology prediction.
Tim holds a Master of Computer Science from McMaster University. He began working with Gaston Gonnet at the University of Waterloo on the OED project and later was an early employee at Open Text Corporation. Tim specializes in extracting valuable data from highly disparate and complex data sets.
John graduated with a Bachelor of Mathematics from the University of Waterloo in 2008. Since then, he has worked with Tim Snider and Gaston Gonnet on a variety of projects before joining PolyML in 2018. John is particularly skilled in architecting, developing and deploying machine learning solutions.
Fred holds a BA (Honours, STFXU) and an MBA (Ivey). Focusing on business development at PolyML, Fred’s international experience includes director-level Product Management and Global Business Development, VP-level Sales and Marketing, and CEO roles in hardware, semiconductor, software, SaaS and analytics businesses.
Full Stack Developer
A former intern at PolyML, Stephen joined the team full-time in 2021 following completion of his Bachelor of Computer Science at the University of Toronto.
Could this be you?
We have some significant projects underway, and we want to add a team member with skills and experience in the following areas. If you’d like to know more, please contact us!
Industrial Systems Developer
Read more about our exciting work
Please get in touch to discuss your Machine Learning needs!
572 Weber St N.,