Neural Radiance Fields (NeRFs) can be trained using a set of camera poses and associated images as input to estimate density and color values for each position. This position-dependent density learning is of particular interest for photogrammetry, as it enables 3D reconstruction by querying and filtering the NeRF coordinate system based on object density. Traditional methods such as Structure from Motion (SfM) are commonly used to compute camera poses in pre-processing for NeRFs, but the Microsoft HoloLens offers an interesting interface for extracting the required input data directly. We propose a workflow for high-resolution 3D reconstructions almost directly from HoloLens data using NeRFs. Two approaches are considered: internal camera poses from the HoloLens trajectory via a server application, and external camera poses from Structure from Motion, each with an enhanced variant obtained through pose refinement. Results show that the internal camera poses lead to NeRF convergence with a peak signal-to-noise ratio (PSNR) of 25 dB given a simple rotation around the x-axis, and enable a 3D reconstruction.
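To make the density-based reconstruction step concrete, the sketch below shows one common way to turn a trained NeRF into a point cloud: query the density output on a regular 3D grid and keep only positions above a threshold. The `density_fn` callable, the grid bounds, the resolution, and the threshold are illustrative assumptions, not values or code from this work.

```python
# Minimal sketch of density-based point extraction from a trained NeRF.
# `density_fn` is a hypothetical stand-in for the trained model's density
# (sigma) prediction at a 3D position; bounds, resolution, and threshold
# are illustrative assumptions.
import numpy as np

def extract_point_cloud(density_fn, bounds=(-1.0, 1.0), resolution=128, threshold=10.0):
    """Query sigma on a regular grid and keep positions above a density threshold."""
    axis = np.linspace(bounds[0], bounds[1], resolution)
    # Build an (R, R, R, 3) grid of query coordinates, flattened to (N, 3).
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    positions = grid.reshape(-1, 3)
    sigma = density_fn(positions)          # (N,) predicted densities
    return positions[sigma > threshold]    # filtered 3D points
```

In practice, the extracted points can then be filtered further or meshed; the threshold trades off completeness against noise in the reconstruction.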
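For reference, the PSNR value reported above follows the standard definition used in NeRF evaluation; a minimal version for images normalized to [0, 1] is sketched below, where 25 dB corresponds to a mean squared error of about 3.2e-3.

```python
# Standard PSNR between a rendered and a reference image.
# For max_val = 1.0 this reduces to PSNR = -10 * log10(MSE).
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)
```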