Systems And Methods For Generating Of 3D Information On A User Display From Processing Of Sensor Data For Objects, Components Or Features Of Interest In A Scene And User Navigation Thereon

Patent No. US11935288 (titled "Systems And Methods For Generating Of 3D Information On A User Display From Processing Of Sensor Data For Objects, Components Or Features Of Interest In A Scene And User Navigation Thereon") was filed by Pointivo Inc on Jan 3, 2022.

What is this patent about?

’288 is related to the field of remote inspection using sensor data, specifically addressing the challenge of visualizing and interpreting sensor data acquired for objects of interest in a scene. Traditional methods often involve either in-person inspections, which can be dangerous or limited, or remote viewing of previously captured data, which can suffer from data gaps and a lack of real-time context. The patent aims to bridge the gap between these approaches by providing an improved methodology for visualizing sensor data on a user's display.

The underlying idea behind ’288 is to create a synchronized multi-viewport display that allows a user to navigate a 3D representation of a scene while simultaneously viewing other relevant data types, such as 2D images, that are dynamically updated based on the user's viewpoint. This is achieved by registering different data types to a common coordinate system and then processing the data to generate an object-centric visualization that provides contextually relevant information based on the user's interaction with the scene.
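
To make the registration step concrete, here is a minimal sketch of expressing heterogeneous sensor data in one shared coordinate system, assuming each data source carries a known rigid sensor-to-world transform. The names (`SensorRecord`, `register_to_world`) are illustrative, not taken from the patent.

```python
# Minimal sketch: register heterogeneous sensor data to a common world
# frame, assuming a known 4x4 homogeneous transform per data source.
import numpy as np
from dataclasses import dataclass

@dataclass
class SensorRecord:
    points: np.ndarray           # (N, 3) points in the sensor's own frame
    sensor_to_world: np.ndarray  # (4, 4) rigid transform into the world frame

def register_to_world(record: SensorRecord) -> np.ndarray:
    """Express a record's 3D points in the shared world coordinate system."""
    n = record.points.shape[0]
    homogeneous = np.hstack([record.points, np.ones((n, 1))])   # (N, 4)
    return (record.sensor_to_world @ homogeneous.T).T[:, :3]    # (N, 3)
```

Once every data type is expressed (or expressible) in the same frame, a 2D image pixel, a point-cloud vertex, and a user's camera pose can all be related to one another, which is what makes the synchronized viewports possible.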

The claims of ’288 focus on a method comprising providing a stored data collection associated with an object of interest, generating an object information display in a single user viewport, navigating a scene camera to generate a user-selected positioning relative to a 3D representation of the object of interest, and updating the object information display in real time as the scene camera is being navigated. The updated object information display includes an object-centric visualization of the 3D representation of the object of interest derived from the user's positioning of the scene camera relative to the 3D representation as appearing in the single user viewport, and is provided with a concurrent display of at least one additional data type.
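
As a rough illustration of the claimed flow, the display update can be thought of as a callback driven by camera navigation. Everything below is a hypothetical sketch under that assumption, not the patent's implementation.

```python
# Hedged sketch of the claim-1 flow: the user navigates a scene camera, and
# the object information display is rebuilt in real time from the resulting
# pose. All names here are illustrative placeholders, not the patent's API.
from dataclasses import dataclass

@dataclass
class CameraPose:
    position: tuple  # (x, y, z) scene-camera location in the world frame
    look_at: tuple   # (x, y, z) point the camera is aimed at

def update_object_information_display(pose: CameraPose, collection: dict) -> dict:
    """Rebuild the single-viewport display for the current camera pose."""
    # Object-centric visualization of the 3D representation, derived from
    # the user-selected positioning of the scene camera.
    view_3d = {"points": collection["point_cloud"], "pose": pose}
    # Concurrent display of at least one additional data type (here, 2D
    # images keyed to whatever the camera is looking at).
    view_2d = {"images": collection["images"], "keyed_to": pose.look_at}
    return {"3d": view_3d, "2d": view_2d}

# Each navigation event would re-invoke the update:
collection = {"point_cloud": [], "images": []}
display = update_object_information_display(
    CameraPose(position=(10.0, 0.0, 5.0), look_at=(0.0, 0.0, 20.0)), collection
)
```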

In practice, the invention allows a user to virtually explore a scene, such as a cellular tower or a commercial roof, and obtain detailed information about specific objects or features. For example, a user could navigate a 3D point cloud of a cellular tower and, as they focus on a particular antenna, the system would automatically display high-resolution 2D images of that antenna from various angles. This allows the user to inspect the antenna for damage or other issues without having to physically climb the tower.
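
The image-selection step invites a small sketch. The ranking below is an assumption about how such a system could choose which 2D captures to surface, not the patent's disclosed algorithm: rank capture cameras by how directly and closely they view the 3D point the user has focused on.

```python
# Hypothetical selection rule: among the capture cameras, keep those whose
# viewing direction points at the focused 3D point (e.g., an antenna) and
# order them nearest first.
import numpy as np

def rank_images_for_target(target: np.ndarray, cam_centers: np.ndarray,
                           cam_forwards: np.ndarray,
                           max_angle_deg: float = 30.0) -> np.ndarray:
    """Return capture-camera indices that view `target`, nearest first.

    target: (3,) focused world point; cam_centers: (M, 3) camera positions;
    cam_forwards: (M, 3) unit viewing directions of each capture camera.
    """
    to_target = target - cam_centers                       # (M, 3)
    dist = np.linalg.norm(to_target, axis=1)
    cos_angle = np.einsum("ij,ij->i", to_target / dist[:, None], cam_forwards)
    visible = cos_angle >= np.cos(np.radians(max_angle_deg))
    idx = np.where(visible)[0]
    return idx[np.argsort(dist[idx])]
```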

’288 differentiates itself from prior approaches by synchronizing different data types and providing an object-centric visualization that is dynamically updated based on the user's viewpoint. This allows the user to seamlessly integrate different types of information and generate contextually relevant insights. Furthermore, the system can infer user intent based on their navigation and positioning of the scene camera, and use this information to further refine the visualization and provide more relevant data.
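
The patent does not spell out how intent is inferred; the dwell-and-proximity heuristic below is purely a hypothetical sketch of the idea that sustained, close-range attention to one region signals a request for detail.

```python
# Hypothetical intent heuristic, not from the patent: a scene camera that
# has nearly stopped moving close to an object suggests detail inspection.
import numpy as np

def infer_intent(recent_positions: np.ndarray, target: np.ndarray,
                 dwell_radius: float = 1.0,
                 detail_distance: float = 10.0) -> str:
    """Classify navigation as detail inspection vs. overview browsing.

    recent_positions: (K, 3) last K scene-camera positions; target: (3,)
    the point the camera is aimed at.
    """
    spread = np.linalg.norm(
        recent_positions - recent_positions.mean(axis=0), axis=1).max()
    distance = np.linalg.norm(recent_positions[-1] - target)
    if spread < dwell_radius and distance < detail_distance:
        return "inspect_detail"  # surface close-range 2D images of the region
    return "overview"            # keep the wide-area scene context
```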

How does this patent fit into the bigger picture?

Technical landscape at the time

In the early 2020s when ’288 was filed, image capture and processing technologies were advancing rapidly, and remotely operated vehicles such as drones were increasingly used for tasks like object inspection. Systems commonly relied on RGB images and other sensor data to generate 3D information, but hardware and software constraints made it non-trivial to provide users with contextually relevant information derived from this data in real time, especially when data capture was separated from data analysis.

Novelty and Inventive Step

The examiner allowed the claims because the previously cited references did not disclose the claims as amended. The closest cited art described object-centric exploration techniques for handheld augmented reality that allow users to access information freely using a virtual copy metaphor.

Claims

There are 11 claims in total, with claim 1 being the only independent claim. Independent claim 1 is directed to a method of remotely inspecting a real-life object using previously acquired object-related data. The dependent claims elaborate on the method, specifying details and variations of the data types, display configurations, user navigation, and information derived from the inspection.

Key Claim Terms

Definitions of key terms used in the patent claims.

Object-centric visualization (Claim 1)

Support in specification: “In notable differences from prior art vs. the systems and methods herein that provide concurrent 2D and 3D information display methodologies when the user navigates through a 3D rendering of a scene, the systems and methods herein are configurable to display 2D and 3D sensor-derived data that is generated via an object-centric approach that results from an inference of user intent derived from user navigation through and around the 3D rendering vis-à-vis his scene camera. As such, the generated user display incorporating the at least two viewports of the scene and any objects or features therein can be derived in the present disclosure from the context of the scene and/or the surface(s) and/or the object(s) as present in the 3D information, such as the nature and characteristics of the object(s) and the likely information that the user may be seeking from his navigation and positioning of his scene camera through and around the visualization of 3D information displayed to the user in real time.”

Interpretation: A visualization of the 3D representation of the object of interest derived from the user's positioning of the scene camera relative to the 3D representation as appearing in the single user viewport.

Object information display (Claim 1)

Support in specification: “Aspects of the present disclosure are related to visualization of acquired sensor data for objects, components, or features of interest in a scene. The data can include 2D and 3D information obtained from or derived from sensors. In one aspect, among others, a method comprises providing, by a computer, a first sensor data collection associated with a first object in a scene or location.”

Interpretation: A display generated by the computer in a single user viewport, comprising a first data type and at least one additional data type present in or derived from the stored data collection, including a 3D representation of all or part of the object of interest, wherein the data types are synchronized.

Scene camera (Claim 1)

Support in specification: “In an implementation, the user can navigate and position his scene camera to select a location or area on the base viewport or one or more additional viewport presented on his display via clicking or otherwise activating a pointing device (e.g., mouse, pen, finger, etc.) that is itself being deployed by the user to navigate around the at least two viewports that comprise the object, feature, scene, or location of interest. Alternatively, the user's scene camera can locate the user in relation to the base viewport and any concurrently displayed at least one additional visualization, and a separate device (e.g., pen, finger) can be used to identify a location or area of interest on the viewport.”

Interpretation: A virtual camera that a user navigates to select a positioning relative to the 3D representation of the object of interest as displayed in the single user viewport.

Stored data collection (Claim 1)

Support in specification: “In one aspect, among others, a method comprises providing, by a computer, a first sensor data collection associated with a first object in a scene or location. The first sensor data collection can be generated from one or more sensor data acquisition events and the first sensor data collection comprises synchronized sensor data including one or more sensor data types. The first sensor data collection is generated by transforming all sensor data in the first sensor data collection into a single coordinate system; or calculating one or more transformations for sensor data in the first sensor data collection, wherein the one or more transformations enable representation of the sensor data in the first sensor data collection in a single coordinate system.”

Interpretation: A collection of data associated with an object of interest in a real-life scene or location, comprising at least two different data types, one of which is or is derived from 2D aerial images acquired by a UAV.

Litigation Cases

Latest US litigation cases involving this patent.

Case Number: 8:25-cv-00576
Filing Date: Mar 10, 2025
Title: Pointivo, Inc. v. 5X5 Technologies, Inc.

Patent Family

File Wrapper

The dossier documents provide a comprehensive record of the patent's prosecution history, including filings, correspondence, and decisions made by patent offices. They are crucial for understanding the patent's legal journey and any challenges it faced during examination.


US11935288
POINTIVO INC
Application Number: US17567347
Filing Date: Jan 3, 2022
Status: Granted
Expiry Date: Dec 1, 2040
External Links: Slate, USPTO, Google Patents