Suburban Driving Dataset
Sample suburban driving dataset to assess compatibility.
Light condition: Mostly static
Traffic condition: Moderate
Features: Roundabouts, stop signs, traffic lights, 4-way intersections, roads with different elevations, no-lane roads, suburban neighborhoods, underpasses, two-way/one-way traffic, bicycle lanes, curbside parked vehicles, cars, trucks, vans, pedestrian crossings.
Overview of Data Packages
All datasets are recorded using various sensors, including:
Three cameras: RGB Exterior, RGB Interior, and Thermal
360-degree Lidar
HD Radar
Inertial Navigation System (INS)
Controller Area Network Bus (CAN Bus)
The data package includes the following components:
ROS Data: The data has been recorded in ROS format and comprises various data topics from cameras, lidar, radar, GPS, IMU, and vehicle CAN data.
Further, upon request, the camera data can be processed by feeding it through neural networks; the processed outputs include lane, object, and drivable-space detection, plus lane-width estimation.
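As an illustration, the topics in a recorded bag can be inspected with the standard ROS 1 Python API. This is a minimal sketch; the bag filename and topic names are hypothetical and will differ in the actual recording:

```python
# Minimal sketch, assuming a ROS 1 bag; the filename and topic names
# are hypothetical and will differ in the actual recording.
import rosbag

bag = rosbag.Bag("suburban_drive.bag")

# List every recorded topic with its message type and message count.
info = bag.get_type_and_topic_info()
for topic, details in info.topics.items():
    print(f"{topic}: {details.msg_type} ({details.message_count} msgs)")

# Peek at the first message on a hypothetical exterior-camera topic.
for topic, msg, t in bag.read_messages(topics=["/camera/exterior/image_raw"]):
    print(t.to_sec(), msg.header.stamp)
    break

bag.close()
```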
Synchronized Data: This raw data consists of JPG files for camera data, PCD files for lidar and radar data, and JSON files for other data, including the radar object list. The data is time-synchronized using VSI algorithms with minimal error.
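A minimal sketch of loading one synchronized frame follows; the directory layout and frame naming are hypothetical, and the PCD file is read here with the open-source Open3D library:

```python
# Minimal sketch for loading one synchronized frame; the directory
# layout and frame naming are hypothetical.
import json
from PIL import Image
import open3d as o3d

frame_id = "000042"

# Camera frame (JPG).
image = Image.open(f"camera_exterior/{frame_id}.jpg")

# Lidar sweep (PCD); Open3D returns an N x 3 point set.
cloud = o3d.io.read_point_cloud(f"lidar/{frame_id}.pcd")

# Radar object list and other metadata (JSON), assumed to be an array.
with open(f"radar/{frame_id}.json") as f:
    radar_objects = json.load(f)

print(image.size, len(cloud.points), len(radar_objects))
```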
Extrinsic Translations and Rotations: This refers to the spatial relationships between the different sensors in a multi-sensor system. These translations and rotations describe how each sensor is positioned and oriented relative to a common reference frame, which in the VSI system is the INS. A short usage sketch follows the two extrinsic entries below.
Lidar to Exterior Camera Extrinsic (3D): This provides the geometric transformation between the lidar sensor, typically mounted facing forward on the vehicle, and the forward-facing exterior camera. The transformation maps the 3D point cloud generated by the lidar onto the corresponding image captured by the camera.
Radar to Exterior Camera Extrinsic (2D): This calibration data determines how the radar's measurements correspond to the camera's field of view, accounting for translation, rotation, and potentially other transformation parameters.
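To illustrate how these calibration files are typically used, the sketch below composes a hypothetical lidar-to-camera extrinsic into a 4x4 homogeneous transform and projects lidar points into pixel coordinates using a hypothetical intrinsic matrix (intrinsics are described in the next item); the 2D radar case is analogous, with a planar rotation and translation. All numeric values are placeholders, not actual calibration results:

```python
# Minimal sketch: all numeric values are hypothetical placeholders,
# not actual calibration results from the data package.
import numpy as np

# Hypothetical lidar-to-camera extrinsic: an axis swap from lidar
# axes (x forward, y left, z up) to camera axes (z forward, x right,
# y down), plus a small translation in meters.
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.10, -0.30, -0.50])

# Compose the 4x4 homogeneous transform so that p_cam = T @ p_lidar.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# Hypothetical 3x3 intrinsic matrix (see Camera Intrinsics below).
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])

# Project two lidar points (meters, lidar frame) into pixels.
pts = np.array([[12.0,  1.5, 0.2],
                [30.0, -4.0, 1.0]])
pts_h = np.hstack([pts, np.ones((len(pts), 1))])
cam = (T @ pts_h.T).T[:, :3]
cam = cam[cam[:, 2] > 0]            # keep points in front of the camera
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]         # perspective divide
print(uv)                           # pixel coordinates (u, v)
```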
Camera Intrinsics: This provides the internal parameters of each camera that define its imaging characteristics, including the focal length, optical center, and lens distortion coefficients. The intrinsic data for all three cameras makes it possible to correct for distortion and accurately interpret the captured images, as sketched below.
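For example, the intrinsics can be used to undistort an image with OpenCV; this is a minimal sketch, and the intrinsic matrix, distortion coefficients, and filename are hypothetical placeholders for the delivered values:

```python
# Minimal sketch; the intrinsic matrix, distortion coefficients, and
# filename are hypothetical placeholders for the delivered values.
import cv2
import numpy as np

K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])           # intrinsic matrix
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])     # k1, k2, p1, p2, k3

img = cv2.imread("camera_exterior/000042.jpg")
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("camera_exterior/000042_undistorted.jpg", undistorted)
```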
Sensor Specifications: This includes datasheets for all sensors to understand the technical characteristics and performance of sensors used in a system.