Add a point cloud option for Record3D (#3556)
* Add a point cloud option for Record3D

* Change parameter names and add voxel size parameter

* fix record3d point cloud docs formatting
Jameson-Crate authored Dec 24, 2024
1 parent 14f1d4e commit 0ced5ce
Showing 6 changed files with 72 additions and 9 deletions.
35 changes: 30 additions & 5 deletions docs/quickstart/custom_dataset.md
@@ -268,6 +268,31 @@ ns-process-data record3d --data {data directory} --output-dir {output directory}
ns-train nerfacto --data {output directory}
```

### Adding a Point Cloud

Adding a point cloud is useful for avoiding random initialization when training Gaussian splats. To add a point cloud using Record3D, follow these steps:

1. Export a zipped sequence of PLY point clouds from Record3D.

<img src="imgs/record_3d_video_example.png" width=150>
<img src="imgs/record_3d_export_button.png" width=150>
<img src="imgs/record_3d_ply_selection.png" width=150>


2. Move the exported zip file from your iPhone to your computer.


3. Unzip the file and move all extracted `.ply` files to a directory.


4. Convert the data to the nerfstudio format, passing the directory from step 3 to the `--ply` flag.

```bash
ns-process-data record3d --data {data directory} --ply {ply directory} --output-dir {output directory}
```

Additionally, you can specify `--voxel-size {float}`, which controls how aggressively the dense point clouds generated by Record3D are downsampled into the sparse point cloud used in Nerfstudio. The default value is 0.8; lower values keep more points (a denser cloud), higher values keep fewer (a sparser cloud).
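For intuition, here is a minimal NumPy stand-in for the voxel-grid downsampling the converter performs (the real pipeline uses Open3D's `voxel_down_sample`; the function below is an illustrative sketch, not nerfstudio code):

```python
import numpy as np

def voxel_down_sample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Keep one representative point (the centroid) per occupied voxel."""
    # Assign each point to an integer voxel index.
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)  # normalize shape across NumPy versions
    n_voxels = int(inverse.max()) + 1
    # Average all points that fall into the same voxel.
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=n_voxels)
    return sums / counts[:, None]

dense = np.random.rand(10_000, 3) * 10.0  # dense cloud in a 10 m cube
sparse = voxel_down_sample(dense, voxel_size=0.8)
print(sparse.shape[0] < dense.shape[0])  # True
```

A larger voxel size means fewer occupied voxels and hence a sparser result, which matches the flag's behavior.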

(spectacularai)=

## Spectacular AI
@@ -292,13 +317,13 @@ pip install spectacularAI[full]
2. Install FFmpeg. Linux: `apt install ffmpeg` (or similar, if using another package manager). Windows: [see here](https://www.editframe.com/guides/how-to-install-and-start-using-ffmpeg-in-under-10-minutes). FFmpeg must be in your `PATH` so that `ffmpeg` works on the command line.

3. Data capture. See [here for specific instructions for each supported device](https://github.com/SpectacularAI/sdk-examples/tree/main/python/mapping#recording-data).

4. Process and export. Once you have recorded a dataset in Spectacular AI format and have it stored in `{data directory}` it can be converted into a Nerfstudio supported format with:

```bash
sai-cli process {data directory} --preview3d --key_frame_distance=0.05 {output directory}
```
The optional `--preview3d` flag shows a 3D preview of the point cloud and estimated trajectory live while VISLAM is running. The `--key_frame_distance` argument can be tuned based on the recorded scene size: 0.05 (5 cm) is good for small scans and 0.15 for room-sized scans. If the processing gets slow, you can also try adding a `--fast` flag to `sai-cli process` to trade off quality for speed.

5. Train. No separate `ns-process-data` step is needed. The data in `{output directory}` can now be trained with Nerfstudio:

@@ -453,7 +478,7 @@ If cropping only needs to be done from the bottom, you can use the `--crop-botto

## 🥽 Render VR Video

Stereo equirectangular rendering for VR video is supported as VR180 and omni-directional stereo (360 VR) Nerfstudio camera types for video and image rendering.

### Omni-directional Stereo (360 VR)
This outputs two equirectangular renders vertically stacked, one for each eye. Omni-directional stereo (ODS) is a method to render VR 3D 360 videos, and may introduce slight depth distortions for close objects. For additional information on how ODS works, refer to this [writeup](https://developers.google.com/vr/jump/rendering-ods-content.pdf).
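Since an ODS render is simply two equirectangular images stacked vertically, downstream tools can split it back into per-eye frames. A small illustrative sketch (not nerfstudio code; the function name is an assumption):

```python
import numpy as np

def split_ods_frame(stacked: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a vertically stacked ODS render into (top_eye, bottom_eye)."""
    h = stacked.shape[0]
    assert h % 2 == 0, "stacked ODS frame height must be even"
    return stacked[: h // 2], stacked[h // 2 :]

# A 2:1 equirectangular render per eye, stacked vertically.
frame = np.zeros((2048, 2048, 3), dtype=np.uint8)
top, bottom = split_ods_frame(frame)
print(top.shape)  # (1024, 2048, 3)
```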
@@ -464,7 +489,7 @@ This outputs two equirectangular renders vertically stacked, one for each eye. O


### VR180
This outputs two 180 deg equirectangular renders horizontally stacked, one for each eye. VR180 is a video format for VR 3D 180 videos. Unlike in omnidirectional stereo, VR180 content only displays front facing content.

<center>
<img width="375" src="https://github-production-user-asset-6210df.s3.amazonaws.com/9502341/255379444-b90f5b3c-5021-4659-8732-17725669914e.jpeg">
@@ -524,4 +549,4 @@ If the depth of the scene is unviewable and looks too close or expanded when vie
- The IPD can be modified in the `cameras.py` script as the variable `vr_ipd` (default is 64 mm).
- Compositing with Blender Objects and VR180 or ODS Renders
- Configure the Blender camera as panoramic and equirectangular. For the VR180 Blender camera, set the panoramic longitude min and max to -90 and 90.
- Change the Stereoscopy mode to "Parallel" and set the Interocular Distance to 0.064 m.
Binary file added docs/quickstart/imgs/record_3d_export_button.png
Binary file added docs/quickstart/imgs/record_3d_ply_selection.png
Binary file added docs/quickstart/imgs/record_3d_video_example.png
32 changes: 29 additions & 3 deletions nerfstudio/process_data/record3d_utils.py
@@ -16,23 +16,32 @@

import json
from pathlib import Path
from typing import List, Optional

import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation

from nerfstudio.process_data.process_data_utils import CAMERA_MODELS
from nerfstudio.utils import io


def record3d_to_json(
    images_paths: List[Path],
    metadata_path: Path,
    output_dir: Path,
    indices: np.ndarray,
    ply_dirname: Optional[Path],
    voxel_size: Optional[float],
) -> int:
    """Converts Record3D's metadata and image paths to a JSON file.

    Args:
        images_paths: list of image paths.
        metadata_path: Path to the Record3D metadata JSON file.
        output_dir: Path to the output directory.
        indices: Indices to sample the metadata_path. Should be the same length as images_paths.
        ply_dirname: Path to the directory of exported ply files.
        voxel_size: Voxel size used when downsampling the dense point clouds.

    Returns:
        The number of registered images.
@@ -87,6 +96,23 @@ def record3d_to_json(images_paths: List[Path], metadata_path: Path, output_dir:

    out["frames"] = frames

    # If a .ply directory is given, add the sparse point cloud for gsplat point initialization
    if ply_dirname is not None:
        assert ply_dirname.exists(), f"Directory not found: {ply_dirname}"
        assert ply_dirname.is_dir(), f"Path given is not a directory: {ply_dirname}"

        # Create a sparse point cloud by merging the voxel-downsampled exports
        pcd = o3d.geometry.PointCloud()
        for ply_filename in ply_dirname.iterdir():
            temp_pcd = o3d.io.read_point_cloud(str(ply_filename))
            pcd += temp_pcd.voxel_down_sample(voxel_size=voxel_size)

        # Save the merged point cloud and reference it for gsplat initialization
        o3d.io.write_point_cloud(str(output_dir / "sparse_pc.ply"), pcd, write_ascii=True)
        out["ply_file_path"] = "sparse_pc.ply"

    with open(output_dir / "transforms.json", "w", encoding="utf-8") as f:
        json.dump(out, f, indent=4)
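For reference, a stdlib-only sketch of how the resulting transforms.json gains its point cloud reference (the frame contents are placeholders, not the real Record3D fields):

```python
import json
from pathlib import Path

def write_transforms(output_dir: Path) -> dict:
    # Placeholder frame list; the real converter fills this from Record3D metadata.
    out = {"frames": [{"file_path": "images/frame_00001.png"}]}
    # Mirror of the new branch: reference the sparse point cloud for splat init.
    out["ply_file_path"] = "sparse_pc.ply"
    with open(output_dir / "transforms.json", "w", encoding="utf-8") as f:
        json.dump(out, f, indent=4)
    return out
```

The `ply_file_path` key is what downstream dataparsers read to locate the sparse point cloud when initializing Gaussian splats.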

14 changes: 13 additions & 1 deletion nerfstudio/scripts/process_data.py
@@ -49,6 +49,11 @@ class ProcessRecord3D(BaseConverterToNerfstudioDataset):
    2. Converts Record3D poses into the nerfstudio format.
    """

    ply_dir: Optional[Path] = None
    """Path to the directory of Record3D exported point cloud .ply files."""
    voxel_size: Optional[float] = 0.8
    """Voxel size used when downsampling the dense point cloud."""

    num_downscales: int = 3
    """Number of times to downscale the images. Downscales by 2 each time. For example a value of 3
    will downscale the images by 2x, 4x, and 8x."""
@@ -101,7 +106,14 @@ def main(self) -> None:
)

        metadata_path = self.data / "metadata.json"
        record3d_utils.record3d_to_json(
            copied_image_paths,
            metadata_path,
            self.output_dir,
            indices=idx,
            ply_dirname=self.ply_dir,
            voxel_size=self.voxel_size,
        )
        CONSOLE.rule("[bold green]:tada: :tada: :tada: All DONE :tada: :tada: :tada:")

        for summary in summary_log:
