I know nothing about coding, but I see the value for AI in the “last mile” between research and audience
I used Claude Code to update my website. A personal website is simple, but the time saved here should have early-career researchers thinking bigger.
I am finalizing the terms to start a postdoc (pending some administrative hurdles at the NIH that involve sign-off from the Director’s office). I had a bit of time between defending my dissertation in late October 2025 and my postdoc start date (tentatively) later this month, which has allowed me time for introspection. I took a look at my finances, and the $21/month I was spending to host my website on Squarespace stood out to me. Around the same time, Casey Newton, a tech journalist (and co-host of the “Hard Fork” podcast, which I listen to weekly), wrote a blog post on his journey creating a website with Claude Code. He claimed that after a few hours of work he was able to create a more personalized website than his old Squarespace template site, and he had the proof to show for it.
When reading the article, I was struck by the fact that he did it with little to no ability to code, and that the website was built purely with code. Before attempting to redesign my website with Claude Code, I had never written a line of code outside of RStudio and Stata. Many skilled programmers would scoff at me even calling the analysis and data wrangling I do there coding. That makes me a fair barometer for whether a product like Claude Code can raise the “floor” of technical ability for researchers like me.
I have tried most of the frontier lab AI products (ChatGPT, Gemini, Claude) because I think their capabilities for tech-related tasks are great. While I am still uncomfortable using them for any consequential interpretation of data or for creative writing, these tools came out of Silicon Valley and are clearly ready today for the tasks their builders center their working lives around (creating apps, creating websites, writing code). For a researcher outside of a highly technical field like computer science, these are all “last-mile” tasks. My thinking has shifted quite a bit on using AI as a researcher, but not in the sense that it will take over my job. This is not a permission slip to hand all of your writing to AI; your writing is a core part of your work, and to me it is one of the easiest ways to convey your own style. Whenever I see a social media post, essay, or marketing material that is obviously AI-generated, it feels inauthentic. That can be inconsequential for some use cases, but in many settings you want to convey that you are willing to take a step back and be thoughtful about how you present your ideas.
When I say “last-mile,” I mean using AI for the things AI companies are building these tools to excel at. We have seen AI misused to complete exams and essays, which wastes the opportunity to gain the foundational knowledge education is designed to provide. Using AI for the last mile means doing all the work that is core to you and your values first, then bringing in AI where one of those Silicon Valley-centered tasks would improve how your work reaches its audience. For researchers, this means the intellectual labor (forming hypotheses, interpreting data, developing theory) stays with us, while the technical delivery tasks (building a website, formatting visualizations, writing code for routine processes) become candidates for AI assistance. There is a lot of rhetoric about AI ushering in an era where it replaces all work. That is a scary assertion, but as someone who has tested a lot of these tools, I think AI is on a path where it replaces all mediocre work.
We are in a period of contraction for research, a period where fewer resources are available and the infrastructure we depend on is dramatically shifting. This blog post by Elizabeth Ginexi (a former NIH Program Officer with over 20 years of service) is a great overview of this contraction from an NIH-centric lens (the lens most academic medical centers hold). Even while we are contracting, I think it could be worthwhile for researchers to use some of these tools for last-mile tasks. You do not need to be a web developer to host a polished website that shows your work and answers some of your frequently asked questions. For example, Mark Dredze, a computer science researcher at Hopkins, is one of the first people I saw with an FAQ section on his website (Dr. Dredze’s section is titled “Prospective Students”). A feature like this handles the repetitive communication: visitors get quick answers instead of sending a simple email that might get lost among the hundreds professors receive weekly.
I have a more nuanced take on where AI falls in my research toolbox. The center of our value proposition as researchers is our brains; secondarily, AI cannot replicate our human relationships. I am comfortable using AI tools for tasks like debugging code and improving data visualization because those are last-mile tasks. Again, I had been paying Squarespace $21 a month since 2021, a monthly deposit to host my own digital business card. None of the content was sensitive, the personalization was fairly limited, and now they are consistently marketing new AI tools to enhance my website.

I am impressed by what I was able to create with Claude Code. I was somewhat satisfied with the template portfolio I kept on Squarespace for about five years. You can view my new website at the same domain I previously used with Squarespace (michaeldgreen.phd). I put some pictures of my old website below for comparison.


I believe the computational power of AI tools is clearly on a transformational trajectory. What does appropriate use of AI in health research actually look like? I think cardiovascular data science offers a useful illustration. From a research perspective, there are plenty of ready-made applications for AI; if you have big data that is consistently available, it can be one of the most appropriate tools. I have seen this firsthand at the American Heart Association Scientific Sessions. Dr. Rohan Khera’s Cardiovascular Data Science (CarDS) Lab at Yale has used big data from health systems and wearables to advance detection of cardiovascular conditions and markers of health. This is a great use of AI and machine learning. For back-to-back years, Dr. Khera’s team ran live workshops at Scientific Sessions for QCOR Day, where a room of mostly clinician-scientists opened Google Colab to see how the team’s models and products are trained. The code is available to everyone, which is common practice in computer science. In population health science, by contrast, methods sharing seems more gatekept: pre-printing your work, making data and code accessible for replication, and other open science principles are not norms for us the way they are in other fields.
I research discrimination faced by individuals. Whenever I submit a paper on discrimination in healthcare, it is often viewed through a race-based lens. I work with survey data, and we are often forced to use purely cross-sectional information. The limitation here for AI is fundamental: if the training data does not adequately capture the phenomena I study (which tend to be underreported, context-dependent, and subject to measurement challenges that even human researchers struggle with), then AI tools will not help and may actively mislead. The most disparate circumstances are the least often captured in the datasets that would train these models.
My own research operates under different constraints that make AI adoption less straightforward. As a population health scientist focused on social determinants and drivers of health, I am highly skeptical about the completeness of data on discrimination for deployment in AI solutions. With current attacks on research exploring contested social topics, I am concerned that restricting this work means the development of AI tools on these topics will lag behind other areas of health research.
I do not see value in using AI for tasks that are core to your professional identity (e.g., writing a first draft of a cover letter, formatting emails); it can waste time and comes off as inauthentic. On the other hand, I benefited from AI when redesigning my website, saving money and making my research mission clearer and more personal. Would I delegate my statistical analysis plans to AI? No. My research hypotheses and question formation? Not those either, considering they are a big part of my identity and why I wake up every day. But for technical tasks where I had a very real skill gap that prevented me from effectively reaching my target audiences, AI was helpful. It can open a new set of tools for disseminating our work without requiring us to hand over the parts of research that make it ours.
