I find that the summaries can be on-point but lack any compelling articulation of the contribution. I hate writing the literature review as much as the next person, but slogging through the relevant “conversation” is the only way to explain how my innovation moves the frontier.
Good point. However, I saw it pre-AI too. Back then I called it hybrid copy-editing/lazy writing: theses or manuscripts that incrementally added precision to previously published methods but lacked similar precision on the novel methods.
I recently peer reviewed a manuscript for a reasonably prestigious ophthalmology journal and noted something similar. The introduction was in exceedingly poor-quality English, but the discussion at the end showed a dramatic improvement. However, it restated the same points again and again, focussing mostly on what was already known. There was, shockingly, no discussion of the study's limitations or suggestions for further research questions — omissions I would expect from an LLM. In my review I only noted the sudden change in the quality of English, stopping short of voicing my suspicion that AI had generated the discussion. But should I have done? At what point do you think one has enough evidence to make what is a very serious accusation? And what even counts as evidence here?
I think I would always focus on the objective flaws of the document itself (introduction is poorly written, discussion is repetitive and misses important components) instead of commenting on how the document was generated. In the end, we can never know how the document was written. We can only comment on what we see.
I teach undergraduate courses at a university in Burnaby, British Columbia, with students from Canada, the US, China, Nigeria, and beyond. Having struggled to learn new languages myself, I’m mindful of students with English as a second language: less critical of their grammar when grading, but without softening my comments or suggested revisions. The goal is twofold: to learn the material and to learn how to write. I use On Writing Well by William Zinsser in all my courses. My main aim is for students to understand the history of public health and population strategies; a close second is helping them become clearer and more effective communicators.
This is a good observation. I’ve noticed that LLMs can summarize information in remarkable depth and with apparent expertise, but in a way that doesn’t sound like how a person would naturally explain it. I wonder if English ability isn’t the real issue here so much as the incentive structures for research and productivity, where any publication can be good enough and LLMs offer the path of least resistance.
I started to notice that type of writing in my government courses this summer: over-explanation of the background and under-explanation of the point.