At the terminus of every robotic manipulator is an end-effector. Sensors mounted at the end-effector provide egocentric perception, enabling the robot to touch and see the world from a unique viewpoint. Our existing wireless perception module streams visual (RGBD) and force (haptic) data to other devices and is integral to our autonomous feeding robot application. As our robot applications have grown in scope, so has the demand for more sensors with higher quality and higher sampling frequency. This research focuses on identifying the bottlenecks in transmitting data from multiple sensors and on implementing task-driven data extraction and compression methods. The module compresses the sensor data with optimized processing methods such as real-time object detection, face detection, and pressure prediction. Additionally, this project includes a hardware design component that improves on the previous design and aims to make the mounting technique and sensors easily exchangeable. This research uses the NVIDIA Jetson TX2, which can complete more complex tasks in less computing time than the Intel Joule used in the previous version. The new embedded board mainly uses Python to run deep learning models and uses low-level packages and hardware encoders to interact with sensors and cameras. This project also compares processing speeds across different deep learning frameworks on the Jetson infrastructure and arrives at a faster and more accurate solution. The sensor module becomes a processing node in the robot network, freeing the robot to focus on task-level computation rather than low-level perception calls.
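As an illustration of the task-driven extraction and compression idea described above, the sketch below crops each RGB frame to a detected region of interest before JPEG-encoding it, so only task-relevant pixels are transmitted. The detector stub, frame source, and quality setting are hypothetical stand-ins under assumed names, not the module's actual pipeline.

```python
# Minimal sketch of task-driven compression: keep only a detected region of
# interest (ROI) and JPEG-encode it before sending it over the network.
# detect_roi() is a placeholder for a real-time detector (object or face
# detection); the frame here is synthetic rather than a camera capture.
import cv2
import numpy as np


def detect_roi(frame: np.ndarray) -> tuple:
    """Placeholder detector: return a bounding box (x, y, w, h).

    A real module would run object/face detection here; this stub simply
    keeps the center crop of the frame.
    """
    h, w = frame.shape[:2]
    return w // 4, h // 4, w // 2, h // 2


def compress_frame(frame: np.ndarray, quality: int = 80) -> bytes:
    """Crop the frame to its ROI and JPEG-encode it for transmission."""
    x, y, w, h = detect_roi(frame)
    roi = frame[y:y + h, x:x + w]
    ok, buf = cv2.imencode(".jpg", roi, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    return buf.tobytes()


if __name__ == "__main__":
    # Synthetic 640x480 RGB frame standing in for a camera capture.
    frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    payload = compress_frame(frame)
    print(f"raw: {frame.nbytes} bytes, compressed ROI: {len(payload)} bytes")
```

In practice the ROI comes from the detection model rather than a fixed crop, and the encoded payload would be handed to the wireless transport; the point of the sketch is only that detection-guided cropping plus encoding shrinks the data each frame contributes to the transmission bottleneck.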