Many teleoperation interfaces have been built for mobile manipulation tasks, but most are inaccessible to people with severe physical disabilities, and the interfaces that are accessible are extremely slow: a simple manipulation task, such as picking up a soda can and moving it a few feet, takes around three minutes. Yet people with severe physical disabilities are the ones who stand to benefit most from teleoperation, because it lets them complete tasks they cannot perform on their own, such as opening a door, fetching an object, or making the bed. For a teleoperation interface to be accessible to this population, it must rely only on cursor movement and clicks or on speech commands. Because the existing accessible interfaces are slow, my research goal is to design interfaces that are both accessible and fast.

I have designed three such interfaces so far and am finishing their implementation. Each interface presents two orthogonal camera views instead of the standard single view, letting the user form a complete picture of the robot and its surroundings without changing the viewing angle, a slow and frustrating process. All three interfaces control the robot's end effector: two use different styles of voice commands, and the third uses on-screen buttons. I will soon conduct user studies with the new interfaces to quantify their improvement over the current standard.
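To make the role of the two orthogonal views concrete, the sketch below shows one way that clicks in a pair of axis-aligned views can pin down a full 3D target for the end effector: each view constrains two of the three world coordinates, so one click per view determines all three. This is a minimal illustration of the underlying geometry, not the implementation described above; the `OrthoView` class, its calibration values, and `target_from_clicks` are hypothetical names invented for this example.

```python
# Minimal sketch (assumed, not the actual interface code): fusing one click
# in each of two orthogonal views into a single 3D end-effector target.
from dataclasses import dataclass

@dataclass
class OrthoView:
    """An axis-aligned orthographic view mapping pixels to two world axes."""
    axes: tuple[str, str]        # world axes this view shows, e.g. ("x", "y")
    scale: float                 # meters per pixel (assumed calibration)
    offset: tuple[float, float]  # world coordinates of pixel (0, 0)

    def unproject(self, px: float, py: float) -> dict[str, float]:
        """Convert a pixel click into values for this view's two world axes."""
        u, v = self.axes
        ou, ov = self.offset
        return {u: ou + px * self.scale, v: ov + py * self.scale}

def target_from_clicks(top: OrthoView, side: OrthoView,
                       top_click: tuple[float, float],
                       side_click: tuple[float, float]) -> tuple[float, float, float]:
    """Fuse one click per view into a full (x, y, z) target."""
    coords = top.unproject(*top_click)          # fixes x and y
    coords.update(side.unproject(*side_click))  # fixes z (and re-fixes the shared axis)
    return (coords["x"], coords["y"], coords["z"])

# Example: a top-down view showing x/y and a side view showing x/z.
top = OrthoView(axes=("x", "y"), scale=0.002, offset=(-0.5, -0.5))
side = OrthoView(axes=("x", "z"), scale=0.002, offset=(-0.5, 0.0))
print(target_from_clicks(top, side, (400, 250), (400, 120)))  # -> (0.3, 0.0, 0.24)
```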