
Our Capability

Processes and Applications

Decision Support

AI-enabled decision support that applies explainable Bayesian reasoning and draws on specific expert knowledge. Decisions can be a probabilistic diagnosis or a deterministic pass/fail grade. Natural language lets users interact with the system easily and trust its outputs, supported by simple explanations and reports similar to those they already use.

Stage 1: Determine mission outputs
Identify the outputs that deliver value to the operator and the mission, and establish how these requirements will be quantified.

Stage 2: Select Clinical Assessment Battery
Used for AI model training and validation.

Stage 3: Determine field protocol for data collection
Gather representative data for different scenarios, with sufficient coverage of the assessment battery to enable high-resolution AI training.

Stage 4: Data Collection Tools
To be used in the field by research staff who may be fatigued and distracted, with intermittent power, data logging and communications.

Stage 5: Label Data
Add labels to the data so the AI model can learn from it.

Stage 6: Protocol study
Perform a prototype study with a small number of subjects to prove the functionality of the protocol, tools and computational models.

Stage 7: Large Study
Perform field trial to validate the research.

Stage 8: Deploy
Deploy the fieldable tool.
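The eight stages above can be sketched as a simple progress-tracking structure. This is a hypothetical illustration of how a programme might track which stage it is in, not Ambient's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One stage of the decision-support development pipeline."""
    number: int
    name: str
    complete: bool = False

# The eight stages described above.
PIPELINE = [
    Stage(1, "Determine mission outputs"),
    Stage(2, "Select clinical assessment battery"),
    Stage(3, "Determine field protocol for data collection"),
    Stage(4, "Build data collection tools"),
    Stage(5, "Label data"),
    Stage(6, "Protocol study"),
    Stage(7, "Large study"),
    Stage(8, "Deploy"),
]

def next_stage(pipeline):
    """Return the first incomplete stage, or None once deployment is done."""
    return next((s for s in pipeline if not s.complete), None)
```

Each stage gates the next: for example, the large study (Stage 7) only validates what the protocol study (Stage 6) has already proven on a small scale.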



01: Observe

The first part of the Ambient Cognition Crux Decision stack is the Observe layer. Depending on the needs of the client and the knowledge base built with them, we gather data through general sensors, such as video analysis and human language, as well as sensors specific to the client's project area, such as ECG and heart-rate sensors for medical diagnosis or satellite imaging for surveying.

Sensor Example: Computer Vision
Passive Monitoring

Machine learning locates faces in view of a camera, then computer vision and data analytics measure features that feed higher-level algorithms delivering facial detection, vital signs, fatigue and cognition.

Can be used from a distance or up close.
Potentially integrated with helmet or safety glasses.

SIGNALS:
EOG, blink, pupil tracking, pupil dilation, saccades
vPPG, heart rate, HRV, resp rate
Temperature, GSR, emotional flushing
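As a toy example of how one of these signals might feed a higher-level fatigue estimate, the sketch below converts blink timestamps into a blinks-per-minute rate and compares it to a per-subject baseline. The thresholds and the comparison rule are invented for illustration, not Ambient's algorithm:

```python
def blink_rate(blink_times_s, window_s=60.0):
    """Blinks per minute over the most recent window of blink timestamps (seconds)."""
    if not blink_times_s:
        return 0.0
    latest = max(blink_times_s)
    recent = [t for t in blink_times_s if latest - t <= window_s]
    return len(recent) * 60.0 / window_s

def fatigued(rate_bpm, baseline_bpm, factor=1.5):
    """Flag fatigue when blink rate exceeds the subject's baseline by a chosen
    factor. The factor of 1.5 is an illustrative placeholder."""
    return rate_bpm > baseline_bpm * factor
```

In practice a system like this would fuse several of the listed signals (blink rate, saccades, heart-rate variability) rather than rely on any single one.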

Grading – Seedling Selection

Computer vision to detect seedlings and grade them the way a human grader would.

Object detection, calibrated cameras, dimensional assessment, colour and disease detection. Robot control outputs.

COGNI & MATB

We also have the ability to gather data for more complicated aspects like cognitive function and field performance. This is where COGNI and MATB come in.

COGNI

COGNI is an iPhone app designed to meet the needs of researchers and personnel in remote environments who need to measure objective data on human performance.

COGNI is an innovative iPhone app designed to evaluate cognitive fatigue and performance in wilderness settings. Developed with outdoor enthusiasts, athletes, and professionals in mind, COGNI provides comprehensive tools and assessments to monitor and enhance cognitive function in challenging environments.


MATB

MATB (the Multi-Attribute Task Battery) is a software app for cognitive assessment, designed by NASA for pilot selection. It provides a broad cognitive assessment, including planning, tracking, reaction to random events, problem-solving under time pressure, reaction to audio input, and short-term memory.
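A toy scoring routine for the reaction-to-random-events component might look like the sketch below. The scoring scheme (a fixed response deadline, misses counted separately) is an assumption for illustration; MATB's own scoring differs:

```python
def score_reactions(reaction_times_s, deadline_s=2.0):
    """Summarise a block of reaction-time trials.

    reaction_times_s: one entry per trial; None means no response,
    and responses slower than deadline_s count as misses.
    """
    hits = [t for t in reaction_times_s if t is not None and t <= deadline_s]
    misses = len(reaction_times_s) - len(hits)
    mean_rt = sum(hits) / len(hits) if hits else None
    return {"mean_rt_s": mean_rt, "misses": misses}
```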


02: Orient / Understand

The second part of the Ambient Cognition Crux Decision Stack is the Orient / Understand layer. Here we take all the data from our sensors and parse it using our knowledge base to understand the data in context. For example, when we see spikes in data from an ECG machine, we can analyse them to derive heart rate.
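The ECG example can be sketched in a few lines: detect R-wave spikes by simple thresholding and convert the average spike interval into a heart rate. This is a minimal illustration; production pipelines use far more robust peak detectors:

```python
def heart_rate_bpm(ecg, fs_hz, threshold):
    """Estimate heart rate from an ECG trace.

    ecg: list of samples, fs_hz: sampling rate in Hz,
    threshold: amplitude cutoff for R-peaks.
    Returns None if fewer than two peaks are found.
    """
    # A peak is a local maximum above the threshold.
    peaks = [i for i in range(1, len(ecg) - 1)
             if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]]
    if len(peaks) < 2:
        return None
    # Mean interval between successive peaks, in seconds.
    intervals = [(b - a) / fs_hz for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

The same orient-layer pattern (raw samples in, physiologically meaningful number out) applies to the other sensor streams.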

03: Decide

The third part of the Crux stack is the decision layer. Here we use a Bayesian engine, together with our multi-agent system, to determine a probable “diagnosis” from the data supplied by the previous layers, and to seek further information that narrows the “diagnosis” down.
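A single Bayesian update step of the kind such an engine performs can be shown in a few lines. The hypotheses and likelihood values below are invented for illustration:

```python
def bayes_update(prior, likelihood):
    """Posterior over hypotheses after observing one piece of evidence.

    prior: {hypothesis: P(h)}
    likelihood: {hypothesis: P(evidence | h)}
    """
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two candidate "diagnoses" and the evidence "elevated heart rate".
prior = {"fatigue": 0.5, "dehydration": 0.5}
likelihood = {"fatigue": 0.8, "dehydration": 0.2}  # P(elevated HR | hypothesis)
posterior = bayes_update(prior, likelihood)
```

Chaining such updates as new evidence arrives, and requesting the observation that best separates the remaining hypotheses, is the "seeking further information" behaviour described above.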


04: Action

The final part of the Crux stack is Action. Here, all the data and outcomes produced in the previous layers are reviewed by the subject matter expert, who now has access not only to a vast array of information but also to a system that has analysed all of it and provides valuable input on what action needs to be taken.


Our Capability in Use:

Astronaut Selection

The system is delivered as an AI-powered website that automates the medical and business review process for astronaut selection by understanding the individual's risks and cost to fly, culminating in a NASA Aeromedical Board review and approval.

Interview questions are unique to the candidate and inform the medical findings, enabled by the Ambient CRUX cognitive platform. Once the interview is complete, CRUX generates a medical report including findings, waivers, follow-up actions, a risk assessment and the expected costs of mitigating any risks.

The product is open to other uses, such as commercial and military pilot screening and selection, and to any application that requires many people to be individually interviewed and business decisions to be made at scale.
