Augmenting Scholarly Publishing: Intelligent Emerging Tools & Trends
Shifting AI Policies
If you enjoy reading the newsletter, sign up or share with someone who may be interested!
“We must accept finite disappointment, but never lose infinite hope.”
― Martin Luther King, Jr.
Welcome, readers,
Hope your first few weeks of 2025 were uneventful!
🥏 January came & flew by as we grappled with the evolving AI regulation scene ⛔✅, the AI dominance race 🤖🙄, & the emerging scientific research landscape 💥📉 amid an endless flurry of Executive Orders (high-level overview) from the White House 💥📝 It makes us wonder whether, like ours, your 2025 has been off to a tumultuous start 🚧 & if so, we whole-heartedly empathize 🧡
➡️ Shifting Support
💣 One of the recent Executive Orders that may directly impact our industry is the shift 📉 (a decrease, significant in many cases) in NIH funding 💰 to research institutions, which may reduce scientific output & hence the volume & quality of content that flows through the scholarly publishing industry.
Update: The halt on the Trump administration’s cuts to NIH research payments has been expanded nationwide. The pause, which is to remain in place until otherwise ordered by the court, comes after U.S. District Judge Angel Kelley granted another temporary restraining order earlier Monday in response to a lawsuit filed by attorneys general from 22 states. That earlier pause applied only to those specific states, meaning the NIH policy change was still in effect in the rest of the country.
➡️ ICYMI: Guidelines, what guidelines?
💯 If you are struggling to keep up with the evolving AI guidelines in scientific publishing, you may want to read Avi Staiman’s take on building a risk-based framework for AI guidelines in publishing in CSE’s Science Editor. In this piece, Avi shares many practical recommendations to guide you in building a risk management framework, understanding the role of education & training, & exploring global partnerships for continued success.
➡️ Struggling with how to use Gen AI in science? Well, now you know!
🌟 The Federation of American Societies for Experimental Biology (FASEB) released its public-facing Recommendations for Generative AI in the Biological and Biomedical Sciences, collated by its Gen AI Task Force over the course of 2024. These recommendations focus on 5 key themes: Policy & Regulation; Scientific Integrity & Intellectual Property; Data Privacy & Security; Diversity, Equity, Accessibility, & Inclusion; & Workforce Impact, Training, & Education.
➡️ Ethically integrate AI videos in your work - tell me more!
📽️ Adobe has released a new Firefly video model in beta that it claims is the first commercially safe generative AI video model. Adobe says it has spent heavily to license content and build its video model so you don’t have to worry about brand logos or not-safe-for-work (NSFW) content appearing in outputs. The big questions are how much of the work done by professionals will be automated & how long Adobe can walk this fine line.
➡️ Same same but different different
📢 New models on the block, but similar challenges with bias, safety, & source reliability
DeepSeek V3 and R1 are open-weight AI models developed in China. V3 is a strong general-purpose model for coding & multilingual tasks, while R1 is a reasoning-focused model trained to produce step-by-step chains of thought for more accurate responses. However, both operate under China’s strict regulations, raising concerns about censorship.
Qwen 2.5 Max, developed by Alibaba, is a high-performing AI model with strong capabilities in reasoning, coding, & multilingual tasks. While it pushes the boundaries of AI development, its creation within China’s regulatory framework raises questions about censorship & content limitations.
OpenAI o3-mini aims to match o1-mini’s speed & cost while offering better performance on science, math, & coding tasks. The model runs on OpenAI’s closed architecture, making it hard to independently verify its performance or safety.
OpenAI Deep Research is an agentic AI tool designed for deep, specialized tasks, including advanced reasoning, scientific research, & data analysis. It’s built to support cutting-edge research across various domains but is not open-source, meaning access & customization are limited.
Gemini 2.0 Flash is a strong AI model that handles reasoning, coding, & multilingual tasks well. While it’s not open-source, meaning less transparency, it’s widely used across Google’s platforms.
Gemini Deep Research is built for tackling complex research questions. It’s not open-source, so access is limited, but it’s designed to assist with advanced problem-solving.
➡️ Feedback Corner
Your thoughts & comments are welcome: augmentscholpub@gmail.com
Until next newsletter,
Chhavi Chauhan and Chirag Jay Patel
➡️ About the Authors
Dr. Chhavi Chauhan is a science enthusiast & a former biomedical researcher who now works for the American Society for Investigative Pathology. She is the Founder & President of Samast AI, a renowned AI Ethicist, a serial volunteer, & serves on the Boards of multiple mission-driven organizations. Besides work, she enjoys playing board games, hiking, & working in her yard.
Chirag “Jay” Patel is a jack of all trades, master of none. He has always been a salesperson & works for Cactus Communications, is an SDG Publishers Compact Fellow, & volunteers for SSP & ISMPP. His interests lie at the crossroads of AI, research integrity, science communication, & the Sustainable Development Goals. When he is not working, he likes to goof around with his kids, read, listen to a podcast, spend time in the garden, or go for a walk.
Both authors are thought leaders, renowned public speakers, & invited blog writers.
➡️ Can’t Get Enough of Us?
➡️ Join SSP’s AI Community of Interest (AI CoIN), which we co-facilitate & which has >200 members
🗓️ Authors’ Recent & Upcoming Speaking Engagements & Updates:
Chhavi
➡️ AI Faculty: CSE Virtual DEIA Short Course
➡️ Panelist: “What Does AI Mean for Journals?” session at the National Academy of Sciences Journal Summit in Washington, D.C.
Jay
➡️ Panelist: Ethics in Publishing panel at the APS Global Physics Summit in Anaheim, CA