AI in Neuroscience


Sam Staples

With the rise of AI programs such as ChatGPT, how can neuroscientists remain accountable to scientific rigor?

Valeria Muoio


I think the guiding principle of scientific research must always be ethics, and neuroscientists must keep this in mind. I cannot conceive of a world where neuroscience is not linked to ethics, even though I know this is not always the case. I believe that using ChatGPT fully, rather than only instrumentally for ideas (or for helping with creative blocks, and who hasn't had those?), is wrong: it goes against the ethical principles of intellectual property, of creativity, and of the expression of science as a whole. ChatGPT does not obey the rules of academia, and it still does not obey the rules of science.

That said, the reality is that ChatGPT, Leonardo, and others are here to stay. If so, we should view them naturally, as a sign of cultural revolution and of the passage of time. Better still, neuroscience has always thrived in challenging times. We have all the tools to adapt, so neuroscience and neuroscientists will adapt to this new cultural and intellectual transformation. But we need rules and established standards of conduct for the instrumental, not protagonist, use of artificial intelligence. We need to establish good practices and make use of verification tools, such as AI-detection programs. Good artificial intelligence detectors already exist, and I believe they could be useful for scientific websites and accountable neuroscience journals.

ChatGPT is a cultural, intellectual, and social transformation. As with everything in human history, we must learn to adapt in a good and ethical way. Neuroscientists can play an important role in this adaptation and help fit ChatGPT into this new world of ours.


Jayalakshmi Viswanathan

This is a great question to consider and discuss, because AI has the potential either to exacerbate the existing rigor and reproducibility crisis or to ameliorate it. As a contractor doing program development at NIA, I work on the Alzheimer's Preclinical Efficacy Database, or AlzPED (https://alzped.nia.nih.gov/), and analysis of publications on testing Alzheimer's therapeutics in animal models shows trends of poor rigor and reporting practices that are not improving over time (at least not fast enough). A crucial aspect of AI/ML models is that junk in means junk out: the integrity of the data a model is trained on dictates the utility and validity of that model. In considering how AI may intersect with rigor and reproducibility, it comes down to how scientists use any other tool. Researchers, journals, non-profits, and funding agencies need to work to address the rigor and reproducibility crisis and, as suggested previously, develop best practices, standards, and accountability measures for AI in parallel.

Hugo Sanchez-Castillo

Neuroglionanobots… a pretty cool name, right? In the past I considered myself a huge fan of AI; however, I didn't consider how much AI could affect our work as scientists, or how many human activities could be affected by this new way of getting, organizing, and managing information. There are dark paths along which we need to be careful. The other day I saw a Facebook post in which a young fellow showed how to build a Facebook page to make money. He showed how the AI chose the best topic (health and self-care, in this example), generated "evidence" supporting the page's content, and created smart quotes and highlights to attract attention. At the end, he looked at the camera, proud of his "work," and said it had all been done in minutes thanks to the AI. There are so many problems in that post: first, someone without any training in health, medicine, or pharmacy can set up a page giving advice on self-care and nutrition; second, nobody can corroborate the information, because it is not clear how the AI sourced it from the web; third, where is the ethics in doing things that way? I could continue, but I believe my point is clear.


AI feeds on the inputs we regularly produce, in the form of papers, books, blogs, comments, and so on. But if we continuously use it to generate the very products that become its inputs, then what? Do you see the problem? In the long term, AI could be fed in a loop with no input from the outside. I know I may be overreacting, but the truth is that we need laws to regulate the use of AI; we need to bring ethics into this discussion; and we need to show new generations the implications of misusing AI.


Live long and prosper, humans.
