You give them a problem, and they solve it using code.
It also depends on the complexity of the problem. How we define "a problem" determines whether AI can write code to solve it (i.e., retrieve a solution already stored within it), or whether mathematicians need to spend many years, possibly centuries, to find the ultimate solution.
Writing code from instructions provided by someone else is not problem-solving. It is essentially translating a solution from one language to another, and tools like ChatGPT will likely render such translation jobs obsolete.
Consequently, the bar will be raised to a level where genuine thinking is required, beyond the capabilities of search engines and current Large Language Models (LLMs).
Therefore, my advice to anyone considering a career in software development is to hone their thinking skills while studying the fundamentals of mathematics, computer science, engineering, and related fields. Particular content may not be relevant to the job, but thinking skills are.
In fact, interview questions and mini-problems are valuable because they let recruiters assess how a candidate thinks, how they have learned, and how they continue to learn. Building software is essentially about operating proficiently with knowledge. Syntax or framework knowledge is important, but not essential.
Shameless plug - what I've recently written on a similar topic: https://medium.com/@roman-suzi/on-ai-technology-and-future-of-software-3653b9e26a74