
We at Verilog Meetup constructed an exam/interview problem that has an interesting property: if a student tries to figure out a solution by thinking it through on his own, he usually succeeds; however, if he dumps the problem on ChatGPT, the solution fails (it does not pass the automated test), and the student goes into a death spiral of futility, kicking ChatGPT to get the solution right.
There is nothing weird about the problem; we do this in the industry all the time:
A student has to write a pipelined block that computes a simple formula.
He must use the pre-existing pipelined sub-blocks for floating-point multiplication and addition. He may not write his own blocks. The latency of the given blocks is not specified in the text of the assignment; the student needs to figure it out by himself, either through simulation or by analyzing the code (a minimal simulation sketch follows below).
The solution is checked by a pre-existing testbench against a transaction-level model. A solution that does not get a PASS on this testbench is not considered. The student cannot submit a solution that passes only on his own testbench.
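How would a student find the latency through simulation? Here is a minimal sketch, assuming a hypothetical pipelined multiplier named f_mult with an up_valid / down_valid handshake; the actual sub-blocks in the assignment may have different names, ports and widths, and the exact cycle count should be cross-checked against the waveform:

    // A latency probe sketch. Module and port names (f_mult, up_valid,
    // down_valid) are hypothetical; the real sub-blocks may differ.

    module latency_probe;

        logic        clk, rst, up_valid, down_valid;
        logic [31:0] a, b, res;

        f_mult dut (.*);  // the pipelined floating-point multiplier under test

        initial begin
            clk = 0;
            forever # 5 clk = ~ clk;
        end

        initial begin
            int latency;

            rst      <= 1; up_valid <= 0;
            repeat (3) @ (posedge clk);
            rst      <= 0;
            @ (posedge clk);

            // Drive a single transaction for exactly one cycle

            a        <= 32'h3f800000;  // 1.0
            b        <= 32'h40000000;  // 2.0
            up_valid <= 1;
            @ (posedge clk);
            up_valid <= 0;

            // Count the cycles until the result valid comes out

            latency = 0;

            while (down_valid !== 1'b1) begin
                @ (posedge clk);
                latency ++;
            end

            $display ("latency: %0d cycles, res = %h", latency, res);
            $finish;
        end

    endmodule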

Why is such an assignment hard for ChatGPT?
The sub-blocks are created by combining code from two different GitHub repositories, plus they use parameterization.
ChatGPT cannot run RTL simulation to find out the latency of a sub-block, which is critical information for a solution. It is also not smart enough to figure out the latency by analyzing the code. It is possible to build a solution that ignores latency, uses valids and computes the formula correctly, but such a solution will not handle back-to-back transactions every cycle and will fail the given testbench as well.
ChatGPT is also not smart enough to analyze the testbench, particularly the order of events in it: pushing transactions into a SystemVerilog queue and popping them out. In addition, the testbench randomizes both the values and the cycle gaps between the transactions, so it is difficult to build something ad hoc just to get a PASS.
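To illustrate what that order of events looks like, here is a generic sketch of the queue-based checking pattern, with made-up names (in_valid, out_valid, ref_model); the real testbench is more elaborate, with randomized values, randomized gaps and a proper floating-point reference model:

    // A generic scoreboard sketch; names are made up for illustration

    logic [31:0] expected_queue [$];  // SystemVerilog unbounded queue

    // When a transaction enters the DUT,
    // push the reference-model result into the queue

    always @ (posedge clk)
        if (in_valid)
            expected_queue.push_back (ref_model (a, b, c));

    // When the DUT produces a result,
    // pop the oldest expected value and compare

    always @ (posedge clk)
        if (out_valid)
        begin
            if (expected_queue.size () == 0)
                $error ("unexpected output: the queue is empty");
            else if (res !== expected_queue.pop_front ())
                $error ("result mismatch");
        end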
The formula itself is built in a way that does not allow the submodules to be simply connected together. I will not go into details here. If you are a student, you can try the problem by yourself and see if you get a PASS. Specifically, read the following article and implement two exercises from it:
Exercise 3. A pipelined implementation capable of accepting the formula arguments back-to-back, receiving a new set of arguments every clock cycle, indefinitely and without gaps (a generic sketch of the underlying latency-matching idea follows below).
Exercise 2. An FSM-based implementation that uses a minimal number of arithmetic blocks but exploits the fact that these blocks are pipelined inside. This property can reduce the number of FSM states.
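For the pipelined variant, the key structural idea is latency matching: an operand that has to meet the output of a pipelined block must itself be delayed by that block's latency. Here is a minimal sketch, assuming a made-up formula res = a * b + c and hypothetical f_mult / f_add sub-blocks with an up_valid / down_valid handshake; this is not the assignment's actual formula or interface:

    // Latency-matching sketch, NOT the actual assignment formula

    localparam MULT_LATENCY = 3;  // example value; determine the real one by simulation

    logic [31:0] mult_res;
    logic        mult_valid;

    f_mult u_mult
    (
        .clk        ( clk        ),
        .rst        ( rst        ),
        .a          ( a          ),
        .b          ( b          ),
        .up_valid   ( in_valid   ),
        .res        ( mult_res   ),
        .down_valid ( mult_valid )
    );

    // Delay operand "c" so that it arrives at the adder
    // in the same cycle as the corresponding product.
    // The exact alignment has to be verified on the waveform.

    logic [31:0] c_delayed [MULT_LATENCY];

    always_ff @ (posedge clk)
    begin
        c_delayed [0] <= c;

        for (int i = 1; i < MULT_LATENCY; i ++)
            c_delayed [i] <= c_delayed [i - 1];
    end

    f_add u_add
    (
        .clk        ( clk                          ),
        .rst        ( rst                          ),
        .a          ( mult_res                     ),
        .b          ( c_delayed [MULT_LATENCY - 1] ),
        .up_valid   ( mult_valid                   ),
        .res        ( res                          ),
        .down_valid ( out_valid                    )
    );

Because every element of this structure accepts a new input every cycle, the whole datapath handles back-to-back transactions, which is exactly what the testbench exercises.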

What do ChatGPT-loving students do to counteract the difficulties they have with the assignment?
They try to hand in a “solution” where ChatGPT develops not only the block itself, but also the sub-blocks and the testbench. My answer to this is: “I cannot accept this. The point of this exercise is to see how you can build something using other people's sub-blocks and check your implementation using a pre-existing testbench.”
Other students ask many additional questions, trying to use me to generate a prompt for ChatGPT, such as “Are you saying the result should be ready by the sixth cycle?” Or they show me an intermediate waveform rather than the code, for the same purpose.
Finally, some students question whether the problem is relevant to the job of an RTL design engineer. They say something like: “I want to design a CPU/GPU/networking chip. Based on my background in computer architecture / computer graphics / networking, this does not look like something I would do in a workplace.” My answer is: quite the opposite, all these designs have many computing blocks with static pipelines: CPUs use them to crunch numbers, GPUs to process vertex coordinates and fragments, networking chips to process Ethernet packet transfers. You cannot survive if the only thing you know is how to write a multiplexer in Verilog.

The bottom line: I believe universities should integrate more microarchitectural exercises with pipelining, flow control and similar topics into their curricula on digital design and computer architecture. They should also use modern techniques of functional verification against a transaction-level model. Without mastering verification, students cannot discover the corner cases of their designs.
A nice bonus of such exercises is that ChatGPT cannot do them well. If university teachers integrate such exercises early, perhaps in the second year of school, a student has an opportunity to discover that this trade is not for him and change his major. ChatGPT is not a crutch for someone who struggles with design and is unwilling to practice, but wants to stay in tech nonetheless. There are many other jobs here in California: a person can grow strawberries in Watsonville, become a forest ranger in Yosemite National Park, or even go to Hollywood and become a movie star.
Credits for the icons: Freepik - Flaticon: Centaur, Themis and Student.