
Columbia's take on AI

Updated: Apr 3

Recently, my most pressing issue with developing AI models at MIT has been refining the data so that it doesn't reflect irrelevant variations.

Every data set drawn from multiple experiments or sources carries one considerable issue: small differences in the recordings (typically due to varying biological practices) that can affect results in undesired ways.
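One common way to reduce this kind of source-to-source variation is to standardize each source's recordings separately before pooling them for training. The sketch below is a minimal illustration of that idea, not the actual pipeline from my work; the function and source names are hypothetical.

```python
import numpy as np

def standardize_per_source(recordings: dict) -> dict:
    """Z-score each source's recordings independently so that
    systematic offsets between experiments are removed before
    the data sets are pooled for model training."""
    normalized = {}
    for source, data in recordings.items():
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        std[std == 0] = 1.0  # avoid division by zero on constant features
        normalized[source] = (data - mean) / std
    return normalized

# Two hypothetical "experiments" measuring the same quantity,
# offset from each other by a systematic recording difference.
a = np.array([[1.0], [2.0], [3.0]])
b = a + 10.0  # same signal, shifted baseline
pooled = standardize_per_source({"lab_a": a, "lab_b": b})
# After per-source standardization the two sets coincide.
print(np.allclose(pooled["lab_a"], pooled["lab_b"]))  # True
```

The design choice here is that each source supplies its own mean and spread, so a constant offset introduced by one lab's equipment or protocol disappears instead of being learned by the model as if it were signal.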


Feeding imperfect data to an AI model––the issue I am currently focused on––is just one of many contributors to AI's overall inaccuracy.


A recent notice published by Columbia University's Data Science Institute outlines how AI is becoming increasingly implemented in the university's health research––specifically for assistive mental health practice, where users can receive help through AI prompts. However, the institute's largest concern isn't just the development of a useful AI system, but the optimization of its accuracy while following ethical guidelines.

This project, involving faculty members James L. David and Nabila El-Bassel from the School of Social Work and biomedical informatics researchers Lena Mamykina and Orson Xu, specifically aims to develop and regulate AI simultaneously, so that the ethics portion of the project is weighed just as heavily as its actual construction. They note that while AI can significantly help groups who lack access to mental health assistance, there are still "enormous risks" involving "concerns with safety, accuracy, equity, and appropriateness" which must be considered throughout the process.


The "Thinking Machines" programs in particular are very important to the research at Columbia. These programs offer deeper insight into the actual workings of AI, including the mathematics and algorithms behind its ability to generate content and make decisions. The goal is to study and improve this complex networking instead of taking a typical upscaling approach.


One key takeaway from Columbia's work on this project is its approach to handling the flaws in AI systems. This is especially important given the delicate nature of mental health assistance––as with most health studies––which leaves little room for error, especially when users arrive hoping for comfort or self-improvement.


Columbia's take on this project is a prime example of how working with AI requires caution and care––no system is flawless, and all developers must ensure that their ethical practices are suitable for users.



Credits:

“Frontiers in Data Science and AI - the Data Science Institute at Columbia University.” The Data Science Institute at Columbia University, 2026, datascience.columbia.edu/research/grants-funding-opportunities/frontiers-in-data-science-and-ai/

 
 
 
