Update README.md
README.md
---
license: osl-3.0
---
Abstract:

Autonomous driving, when applied to high-speed racing rather than urban environments, presents challenges in scene understanding due to rapid changes in the track environment. Traditional sequential network approaches may struggle to keep up with the real-time knowledge and decision-making demands of an autonomous agent that covers large displacements in a short time. This paper proposes a novel baseline architecture for developing sophisticated models capable of true hardware-enabled parallelism, achieving neural processing speeds that mirror the agent's high velocity. The proposed model, named Parallel Perception Network (PPN), consists of two independent neural networks, a segmentation network and a reconstruction network, running in parallel on separate accelerated hardware. The model takes raw 3D point cloud data from the LiDAR sensor as input and converts it into a 2D Bird's Eye View map on both devices. Each network extracts its input features along the space and time dimensions independently and produces outputs in parallel. Our model is trained on a system with two NVIDIA T4 GPUs using a combination of loss functions, including edge preservation, and shows a 1.8x speedup in model inference time compared to a sequential configuration. Implementation is available at: https://github.com/suwesh/Parallel-Perception-Network.
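
The sketch below illustrates the inference pattern the abstract describes: rasterizing a raw LiDAR point cloud into a 2D Bird's Eye View map and driving two independent networks concurrently on separate GPUs. This is a minimal PyTorch sketch under our own assumptions; the `pointcloud_to_bev` helper, the `TinyNet` stand-in, the grid parameters, and the thread-per-device launch are illustrative, not the PPN implementation itself, which lives in the linked repository.

```python
# Minimal sketch (not the PPN code): one BEV rasterization step feeding two
# independent networks that run concurrently on separate devices.
import threading
import torch
import torch.nn as nn

def pointcloud_to_bev(points: torch.Tensor, grid: int = 512, extent: float = 50.0) -> torch.Tensor:
    """Rasterize an (N, 3) LiDAR point cloud into a 1 x grid x grid occupancy map."""
    xy = points[:, :2]
    keep = (xy.abs() < extent).all(dim=1)            # drop points outside the map extent
    idx = ((xy[keep] + extent) / (2 * extent) * (grid - 1)).long()
    bev = torch.zeros(1, grid, grid)
    bev[0, idx[:, 1], idx[:, 0]] = 1.0               # mark occupied cells
    return bev

class TinyNet(nn.Module):
    """Stand-in for the segmentation / reconstruction branches."""
    def __init__(self, out_channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

def run_branch(net: nn.Module, bev: torch.Tensor, device: str, out: dict, key: str) -> None:
    with torch.no_grad():
        out[key] = net(bev.unsqueeze(0).to(device))  # each thread drives one device

if __name__ == "__main__":
    # Fall back gracefully when two GPUs are not available.
    dev0 = "cuda:0" if torch.cuda.device_count() >= 1 else "cpu"
    dev1 = "cuda:1" if torch.cuda.device_count() >= 2 else dev0

    points = torch.randn(100_000, 3) * 20            # synthetic LiDAR frame
    bev = pointcloud_to_bev(points)

    seg = TinyNet(out_channels=4).to(dev0).eval()    # segmentation branch
    rec = TinyNet(out_channels=1).to(dev1).eval()    # reconstruction branch

    # Launch both branches at once; kernels on distinct GPUs overlap, which is
    # where a parallel configuration gains over running the nets back to back.
    out: dict = {}
    t1 = threading.Thread(target=run_branch, args=(seg, bev, dev0, out, "seg"))
    t2 = threading.Thread(target=run_branch, args=(rec, bev, dev1, out, "rec"))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(out["seg"].shape, out["rec"].shape)        # (1, 4, 512, 512) and (1, 1, 512, 512)
```

Plain threads suffice here because PyTorch releases the GIL while CUDA kernels execute, so the two devices genuinely overlap; the actual PPN may coordinate its devices differently.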