It's a dilemma as old as time. Friday night has rolled around, and you're trying to pick a restaurant for dinner (assuming there are still reservations, since you waited until the last minute to book). Should you go to your beloved regular spot, or try a new establishment in the hopes of discovering something better? Maybe, but that curiosity comes with a risk: if you explore, the food could be worse; if you only exploit, you may never branch out beyond the same narrow rotation.
Curiosity also drives AI to explore the world, now across a growing range of use cases: autonomous navigation, robotic decision making, and optimizing health outcomes. In many of these settings, machines use "reinforcement learning" to accomplish a goal: an AI agent learns iteratively by being rewarded for good behavior and penalized for bad.
Just like a human choosing a restaurant, these agents struggle to balance the time spent discovering better actions (exploration) against the time spent repeating actions that earned high rewards in the past (exploitation). Too much curiosity distracts the agent from making good decisions, while too little means it may never discover the best ones.
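To make the trade-off concrete, here's a toy Python sketch of one classic approach, the epsilon-greedy strategy: with a small probability `epsilon` the agent explores a random restaurant, and otherwise it exploits the one with the best average reward so far. The restaurant names, hidden quality scores, and the `epsilon` value are invented for illustration.

```python
import random

# Each "restaurant" pays out a noisy reward drawn around a hidden
# average quality that the agent does not know in advance.
TRUE_QUALITY = {"old favorite": 7.0, "new bistro": 8.5, "food truck": 5.0}

def visit(restaurant):
    """Simulate one dinner: hidden quality plus some night-to-night noise."""
    return random.gauss(TRUE_QUALITY[restaurant], 1.0)

def epsilon_greedy(n_nights=500, epsilon=0.1):
    """Balance exploration and exploitation with an epsilon-greedy rule."""
    estimates = {r: 0.0 for r in TRUE_QUALITY}  # running average reward
    counts = {r: 0 for r in TRUE_QUALITY}       # visits per restaurant

    for _ in range(n_nights):
        if random.random() < epsilon:
            # Explore: try a restaurant at random.
            choice = random.choice(list(TRUE_QUALITY))
        else:
            # Exploit: go with the best estimate so far.
            choice = max(estimates, key=estimates.get)

        reward = visit(choice)
        counts[choice] += 1
        # Incrementally update the running average reward for this choice.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    return estimates, counts

if __name__ == "__main__":
    estimates, counts = epsilon_greedy()
    for r in TRUE_QUALITY:
        print(f"{r}: estimated {estimates[r]:.2f} over {counts[r]} visits")
```

Tuning `epsilon` is the whole game: set it too high and most Friday nights are gambles on unknown kitchens; set it to zero and the agent can get stuck forever with the first decent meal it happens to find.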