While web agents offer an avenue to solve a plethora of tasks thanks to their ability to navigate the web, they remain brittle and limited in what they can reliably achieve. Common issues such as endless exploration and answer hallucination hinder their deployment to ServiceNow customers. We hypothesize that guiding the agent through its navigation via task-related textual hints can improve its ability to execute tasks successfully. To validate this approach, we evaluate four closed-source AI models on four diverse web tasks (form-filling, sorting, filtering, and information retrieval), with and without hints. We find that hints do yield higher rates of task completion, in some cases more than tripling the success rate. However, challenges related to both the environment and the models still prevent web agents from functioning reliably. Our quantitative and qualitative analyses shed light on these challenges.