Search tasks can be challenging for blind or visually impaired people. To determine an object's location and navigate there, they often rely on the limited sensory feedback of a white cane, search haptically, or ask for help. We introduce MR-Sense, a mixed reality assistant that supports search and navigation tasks. The system is designed in a participatory fashion and utilizes sensor data from a standalone mixed reality head-mounted display to perform deep learning-driven object recognition and environment mapping. Users are supported in object search tasks via spatially mapped audio and vibrotactile feedback. We conducted a preliminary user study with ten blind or visually impaired participants and a final user evaluation with thirteen blind or visually impaired participants. The final study reveals that MR-Sense alone cannot replace the white cane but provides a valuable addition in terms of usability and task load. We further propose a standardized evaluation setup for replicable studies and highlight key potentials and challenges to foster future work on employing technology for accessibility.