5 Comments
Austin Morrissey:

Teaching pedagogy: what does it look like to train students as our tools move to higher levels of abstraction?

If you were to pretend that 80% of performance in science comes from 20% of the work, what has that 20% been in your life? What mental models or concepts have yielded unexpected utility across unrelated problems?

Claus Wilke:

Wow, deep questions that may require entire articles in response. ;-)

Let me answer the first question for now. I'll come back for the second.

For the first question, I think it's critical to actually know the details, to build things from scratch yourself. For example, I'm teaching a class on AI in biology this fall, and we spend a good amount of time talking about how to implement an LLM from scratch and how perceptrons, CNNs, and attention work in detail. If you don't build things from scratch yourself, you never truly understand what they actually do under the hood.
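For instance, the core of the attention mechanism fits in a few lines of NumPy. This is just a minimal illustrative sketch of scaled dot-product attention, not the exact code from the class:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q, K, V: (seq_len, d) arrays of queries, keys, and values.
    d = Q.shape[-1]
    # Similarity of every query to every key, scaled by sqrt(d)
    # so the softmax doesn't saturate as d grows.
    scores = Q @ K.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted average of the values

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))         # 4 tokens, 8-dim embeddings
print(attention(x, x, x).shape)         # self-attention -> (4, 8)
```

Working through something this small by hand is exactly the point: once you've built it, the transformer stops being a black box.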

Claus Wilke:

Ok, here's a simple answer to your second question: Collaborate with the best people in the world working on problems similar to yours. Don't see them as competition. Don't think you have to go it alone every time. Most scientists are nice people and happy to collaborate, in particular if you have something to contribute that complements their strengths. So actively seek out those collaborations.

Also, relatedly, your publication arch-nemesis may actually be a nice person when you meet them face to face. So again, try to build these connections, and reach out to people even if (or in particular if) you think there's some tension. In person, you may find you actually agree on more than you disagree, and can resolve the issue.

Jack Olmstead:

Suppose the generative AI boom is a bubble that pops tomorrow. We've seen more capital invested in computing capacity than ever before, and one would assume those resources would become dirt cheap in this scenario. How could biology most benefit from them? What big problems are within reach when compute is not a limiting factor?

Claus Wilke:

I have a colleague who says this all the time: In 2-3 years GPU time will be dirt cheap because of the overinvestment we're seeing right now.

I'm not convinced we're limited by compute today, so I don't expect much to change. The biggest limitation is experimental data and verification: we can churn out computational predictions in days, and then it takes weeks or months to do the necessary experiments. Additionally, I think we're limited by ideas. Better ideas always beat out more compute, and there are nowhere near enough good ideas floating around, in my opinion. In fact, one problem with abundant compute is that it invites you to stop thinking and try to brute-force problems, and that only gets you so far.
