In this interview Prof. Yates reflects on 25 years of Journal of Proteome Research, sharing how the journal has evolved to meet new challenges and shape new fields of research.

A headshot of Prof. John R. Yates, III, set against an abstract blue and teal background.

Journal of Proteome Research is celebrating 25 years in publication.

Proteomics has changed immensely since 2002, evolving from a method-driven field into a cornerstone of biology and medicine. John R. Yates, III (The Scripps Research Institute) has been with the journal every step of the way—from authoring the very first paper in JPR, to serving on the editorial board, to becoming Editor-in-Chief in 2016. Most recently, Prof. Yates was named a 2026 winner of the Canada Gairdner International Award for “establishing the foundations of modern systems proteomics through transformative innovations in quantitative protein measurement, mass spectrometry technologies, and computational analysis.” This recognition, alongside the many achievements of JPR authors, reflects the deep impact the JPR community has had in defining the field.

We asked Prof. Yates to reflect on this milestone anniversary, share his editorial vision, and imagine what the future might hold for the journal.

Reflections on 25 Years

What has been the biggest change that you've seen for the journal during your career?

The biggest change I have seen resulted from the transformation of proteomics from a technically challenging, largely method-driven field into a central component of modern biology and medicine.

When the journal began, much of the work focused on demonstrating that we could identify proteins at scale and on improving methods to increase scale. Most of the papers we saw in the early years described new tools, mass spectrometers, separations, and computational methods that made proteomics at scale feasible. Today, those capabilities are largely taken for granted, and proteomics is used routinely to address complex biological and clinical questions, so the journal now reflects a field that is far more integrated with biology. From my perspective as an author, editorial board member, and now Editor-in-Chief, the evolution has been remarkable. The growth of the journal has mirrored the growth of the field, shifting from publishing foundational methods to showcasing applications that are deeply embedded in systems biology, disease research, and increasingly, clinical science.

What are some research trends that have emerged over the course of the journal's history that have stood the test of time? Do you think the trend(s) will continue?

One enduring trend is the drive toward greater depth, accuracy, and quantitative rigor in proteomics. Early on, the focus was on identifying as many proteins as possible. That quickly evolved into reliably quantifying them, and then into understanding post-translational modifications and protein interactions. Another trend that has clearly stood the test of time is the integration of proteomics with other disciplines—initially genomics and transcriptomics, and more recently metabolomics. This integrative approach is now essential, not optional.

I also think targeted proteomics has proven to be incredibly durable. While discovery proteomics continues to advance, targeted approaches remain critical for validation and translation, particularly in clinical contexts. These trends will absolutely continue, but they will be shaped by new technologies that increase throughput and reproducibility. The field is moving toward a point where measuring the proteome becomes routine, and the challenge shifts to interpretation.

Another important emerging trend is the rise of affinity-based proteomic platforms for large-scale serum analysis. Serum has always been a challenging matrix because of the enormous dynamic range of protein abundances, which makes comprehensive measurement by traditional mass spectrometry difficult. These technologies have provided a practical way to work around this limitation, enabling the consistent measurement of targeted protein panels across very large patient cohorts. What’s particularly impactful about these approaches is their scalability; they allow thousands of samples to be processed with high reproducibility, which is essential for population-level and clinical studies. While affinity-based proteomic techniques don’t yet offer the same breadth as unbiased proteomics, they have clearly opened the door to population-scale studies that weren’t previously feasible, especially in translational and clinical research.

Can you reflect on any recent research published in the journal that would have been unimaginable back in 2002? How has the field advanced to make this possible?

There are several examples, but single-cell proteomics is probably the most striking. The idea that we could measure thousands of proteins from a single cell would have been difficult to imagine in 2002. At that time, we were excited to identify a few thousand proteins from bulk samples. Now we’re looking at cellular heterogeneity at the protein level.

Similarly, the scale and speed of modern data acquisition would have been unimaginable. Technologies like data-independent acquisition and instruments with extremely fast scan rates have fundamentally changed what’s possible. We can now analyze large cohorts with a level of reproducibility that simply wasn’t achievable before.

What made this possible was the convergence of advances in instrumentation, sample preparation, and computation. Mass spectrometers became faster and more sensitive, separations improved, and software caught up to the data. Just as importantly, the field matured in terms of experimental design and statistical rigor. Proteomics is no longer just about collecting data; it is about generating reliable, interpretable results.

Your Editorial Legacy

When you came on as EIC to the journal, you wanted to expand its reach in metabolomics and systems biology. How have you delivered on that vision and what work do you think remains to be done?

From the beginning, I viewed proteomics as part of a broader effort to understand biological systems at the molecular level. The technologies that enabled proteomics also enabled metabolomics, and the two fields are naturally complementary. Over the past 10 years, we’ve made a concerted effort to bring metabolomics and systems biology more fully into the journal, both through the types of papers we publish and by bringing leading metabolomics researchers onto our editorial team.

We’ve seen strong growth in submissions that integrate proteomics with metabolomics, network biology, and computational modeling. That’s been very encouraging, but there is still work to be done. One challenge with integrative studies is ensuring that they meet the same standard of rigor that proteomics studies have been held to over the years. Another challenge has been broadening our community so that researchers in metabolomics and systems biology see JPR as a natural home for their work.

You authored the very first article in JPR. What was that paper about and why did you choose to publish in the journal? Why should authors today publish in JPR?

The first paper I published in JPR was on a software tool to enable large-scale protein identification using multidimensional LC-MS/MS. At the time, this was a relatively new approach, and it was clear that the field needed a dedicated venue to publish this kind of work. JPR provided that platform.

I chose to publish in the journal because it was designed specifically for this emerging field. It was a place where methodological innovation and biological application could coexist, and this remains true today. JPR continues to be a journal that values both technical rigor and biological insight.

For authors today, the reason to publish in JPR is that it is deeply connected to the proteomics community. The editors and reviewers understand the nuances of the field, and the journal has a strong track record of publishing work that moves the field forward. It’s a place where your work will be evaluated by people who appreciate both the challenges and the opportunities in proteomics.

The journal recently published its 10,000th article—what would you like to say to the authors who have contributed to the journal and made this milestone possible?

Reaching 10,000 articles is a significant milestone, and it reflects the collective effort of the entire community. The authors who have published in JPR over the years have defined the field. They’ve developed the methods, asked the important questions, and pushed the boundaries of what’s possible. I would simply say thank you. The success of the journal is a direct result of the quality and impact of the work that has been submitted to it. It’s been a privilege to see how the field has grown through these contributions.

On the Horizon

What do you think JPR will look like 25 years from now?

If the past 25 years are any indication, the next 25 will be even more transformative. I think JPR will continue to evolve alongside the field, which is becoming increasingly integrated and data driven. We’re already seeing the convergence of proteomics, metabolomics, and other omics approaches, and that will only accelerate. I also hope that JPR will remain a venue for publishing the analytical technologies that continue to move the field forward.

I expect the journal will publish more work that combines experimental and computational approaches, including the use of artificial intelligence to interpret complex datasets. Proteomics will likely become more routine, even in clinical settings, and the journal will reflect that shift.

At the same time, I hope JPR maintains its focus on rigor and innovation. No matter how much the technology changes, those principles will remain essential.

What advice would you give to early-career researchers looking to contribute to the journal?

My advice would be to focus on asking important questions and to be rigorous in how you answer them. It’s easy to get caught up in the technology, but ultimately what matters is the biology. Use the tools to address meaningful problems, and make sure your work is reproducible, well supported, and validated.

I would also encourage early-career researchers to think broadly. The field is moving toward integration, so being able to connect proteomics with other approaches will be increasingly valuable.

What emerging areas of inquiry are you most excited about?

There are several areas that are particularly exciting right now. Single-cell proteomics is enabling entirely new ways of looking at biological systems. Spatial proteomics is providing context that we didn’t have before. And the continued development of high-throughput methods is making it possible to study large populations.

I’m also very interested in how artificial intelligence will impact the field. We’re generating enormous amounts of data, and the ability to quickly interpret that data is increasingly important. AI has the potential to fundamentally change how we interpret data and, hopefully, to accelerate the biological interpretation of these datasets.

More broadly, I think the biggest disruptor will be the ability to integrate different types of data into a coherent understanding of biology. That’s the direction the field is heading, and it’s where I think we’ll see the most significant advances.
