Certify Trustworthiness of Your AI Applications With Cortex Certifai
CognitiveScale, an enterprise AI software vendor, has released a beta version of its new product, Cortex Certifai. It is aimed at helping customers identify, quantify, and manage the risks inherent in AI applications, all without in-depth technical knowledge of AI and machine learning (ML). This industry-first product addresses the need to increase the transparency and trustworthiness of ML models and the AI applications they power.
ML models are, for the most part, inscrutable black boxes that offer no explanation of how they arrived at a particular recommendation. ML models are also inherently biased, and before deploying them into operation, care should be taken to understand which groups of individuals may be disparately impacted. Such models can catastrophically affect individuals’ health and well-being; their access to economic opportunities and quality of life (loans, education, housing, jobs); their liberty; and, in some cases, even their lives.
Cortex Certifai offers insights into these black boxes and increases transparency and trust by, among other things, evaluating key attributes that contribute to model accuracy, testing edge cases, and asking questions such as:
- How did the model predict what it predicted?
- Is the model unfair to a particular group of people?
- How easily can the model be fooled?
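Questions like the fairness one above are typically answered with group fairness metrics. As a purely illustrative sketch (not Certifai’s actual methodology), a demographic-parity check compares favorable-outcome rates across groups:

```python
# Hypothetical illustration of one common group-fairness check
# (demographic parity difference); NOT Certifai's actual method.

def demographic_parity_difference(predictions, groups, favorable=1):
    """Gap between the highest and lowest favorable-outcome rates
    observed across groups (0.0 means perfectly equal rates)."""
    counts = {}
    for pred, grp in zip(predictions, groups):
        n, fav = counts.get(grp, (0, 0))
        counts[grp] = (n + 1, fav + (pred == favorable))
    rates = {g: fav / n for g, (n, fav) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: a loan model approves 80% of group "A" but only 40% of "B".
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(round(demographic_parity_difference(preds, groups), 2))  # 0.4
```

A large gap like this would flag the model for human review; the group labels and threshold for “unfair” are policy decisions, not purely technical ones.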
It even suggests changes to an individual’s profile, so that a person who has received an unfavorable outcome knows what they could do to change it.
In addition to bias and explainability, Cortex Certifai evaluates the following areas of AI risk:
- Data risk
- Robustness in the face of adversarial attacks
Image source: Cortex Certifai product description. Accessed 16 Sept. 2019.
It generates a FICO-like “AI Trust Index” score composed of six key elements of trust: fairness/bias, robustness, explainability, accuracy, compliance, and auditability. This score allows a model to be compared externally against the industry, or used internally for measurement and as a go-live criterion.
Cortex Certifai is integrated into the AI DevOps lifecycle and systems and is available as a stand-alone product or as a container-based Kubernetes application on all major cloud providers.
CognitiveScale is a privately held company whose investors include Norwest Venture Partners, Intel Capital, IBM Watson, M12 (Microsoft Ventures), and USAA. It is ranked #1 in AI patents among privately held companies and #4 overall, behind IBM, Google, and Microsoft. The company’s award-winning Cortex Cognitive Platform is deployed by leading global financial services, healthcare, and digital commerce companies.
If you have or are developing ML applications, sign up for the Certifai beta list and take it for a ride. Keep in mind, though, that machine bias is not merely a technical problem and should not be addressed through technology alone. Engage humans as well as code to identify and mitigate machine biases in your systems.
If you are just pondering AI use cases for your organization, keep this product in mind for future use.
Cortex Certifai is a welcome development, as trust is the foundation of business, especially with new technologies such as ML and AI. With this new product, CognitiveScale joins Google, IBM, and Microsoft, which have already released or are developing solutions to check for and mitigate unwanted biases in ML models and the datasets used to train them. (This is also a very hot and active area of academic and industry research, so more products and capabilities are sure to come.)
It remains to be seen, however, whether Cortex Certifai will live up to its name (both the intelligence implied in “cortex” and the assurance in “certify”), but this is a tremendous step forward.
Want to Know More?
De-Risk Your AI Applications by Proactively Identifying and Mitigating Biases
AI Registers: Finally, a Tool to Increase Transparency in AI/ML