virtual interactive whiteboard

FAQ: Technical Questions

How long can the pen go without a recharge?
Current design parameters point to about 80-160 hours of usage, depending on how frequently the teacher/presenter writes, points and clicks. Assuming 8 hours of such usage 5 days a week, that works out to between two weeks and one month between charges.
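As a quick sanity check on that arithmetic, here is a minimal Python sketch using the figures quoted above (8 hours a day, 5 days a week are the stated assumptions):

# Rough battery-life arithmetic for the pen, using the figures quoted above.
battery_life_hours = (80, 160)   # usage hours per charge: heavy vs. light use
hours_per_day = 8                # assumed active use per day
days_per_week = 5

for hours in battery_life_hours:
    weeks = hours / (hours_per_day * days_per_week)
    print(f"{hours} h per charge -> about {weeks:.0f} weeks between charges")
# 80 h -> 2 weeks; 160 h -> 4 weeks, i.e. roughly one month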
How do you charge the pen?
Either via a mini-USB-to-USB cable connected to a laptop/PC, or with a typical mobile phone charger from mains power. A full charge from empty takes about 1-2 hours. For permanent installations there will be a simple charging cradle which also functions as the holder for the pen, so the pen's charge is topped up automatically when not in use.
How big is the HOLOS sphere and could it be smaller?
In our current design the sphere is the size of a table tennis ball, i.e. 4 cm in diameter. It is lightweight and robust. Sphere size trades off directly against camera resolution and working volume. With a low-end camera resolution, i.e. VGA, a table-tennis-ball-sized sphere gives a decent working volume in front of a surface of up to 2 meters width. With a higher camera resolution such as the now common HD720, and keeping the sphere size the same, the volume width doubles to 4 meters; alternatively, keeping the working volume the same, the sphere can shrink to about the size of a large marble, perceptually less than a quarter the size of a table tennis ball.
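The scaling behind these numbers can be sketched with a simple proportional model: the sphere must span a minimum number of pixels in the image (about 13 for combined position-and-orientation detection, as explained in the detection-distance answer further down), so the maximum working width grows linearly with both sphere diameter and horizontal resolution. The following Python sketch is a back-of-the-envelope illustration under that assumption, not our exact design rules:

# Back-of-envelope model: the sphere must span ~13 px in the image for
# position + orientation detection, so working width scales linearly with
# sphere diameter and horizontal resolution.
MIN_PATTERN_PX = 13

def working_width_m(sphere_diameter_m, horizontal_resolution_px):
    return sphere_diameter_m * horizontal_resolution_px / MIN_PATTERN_PX

print(working_width_m(0.04, 640))   # VGA,   4 cm sphere -> ~2.0 m
print(working_width_m(0.04, 1280))  # HD720, 4 cm sphere -> ~3.9 m (~4 m)
print(working_width_m(0.02, 1280))  # HD720, 2 cm sphere -> ~2.0 m again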
Is this not the same technology as the Sony Wand controller with its glow sphere?
The resemblance is only superficial. The big difference is that Sony's sphere does not carry a pattern. Without a pattern the orientation of the device cannot be detected with a camera, so Sony has to compensate with accelerometers, which suffer from integration drift and do not give absolute measurements, and extra sender/receiver hardware is needed on top. Despite its much larger sphere, Sony's Wand did not come close to the positional and orientational accuracy, low latency and high detection frame rate of HOLOS in our tests.
How much does the pen weigh?
The current prototype weighs 54 g and feels and handles like a normal whiteboard marker. The final design could be even lighter if required.
How accurate is the detection of the sphere and pentip?
Pentip location detection in 2D on the surface of the screen depends on camera resolution and surface area, but typically 0.2 mm at 3 sigma can be achieved (i.e. 99.7% of detections fall within ±0.2 mm, assuming roughly Gaussian errors). For details check the tables and figures here. The 3D pentip detection accuracy in mid-air depends on the distance from the camera, the camera's resolution and Angle of View, but in a typical setup it is on the order of 1-2 mm. For details check the tables and figures here.
The system works with IR - how robust is it against sunlight and other IR light sources?
We are using a new method in our optical filter and detection system, which makes it virtually immune to interference from other light sources including sunlight.
How much processor power does the tracking software need?
For the short time when a button is pressed or the pentip is activated, the detection software uses about 12% of a typical PC's processing power (at 60 detections per second); otherwise it's only about 5%. We are currently working on an update where the processing is done on a low-cost dedicated chip in the camera, doing something akin to a compression algorithm, so that little to no processing power is needed from the presenter's laptop/PC.
How much memory does the tracking software need?
The detection algorithm currently needs no more than 20MB RAM, but we expect it can be optimized to a fraction of that, if required.
How many spheres can you track at the same time? Can you distinguish between them?
At the moment we only track one pen in our demos; however, it is easy to change the software to track two, three or even more, assuming they all have the same function. Distinguishing them, so that for example one sphere is always tracked as a pen and another always as an eraser, is also possible using different HOLOS patterns.
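Conceptually, distinguishing spheres just means mapping each detected pattern to a tool role. The sketch below is purely illustrative; the pattern identifiers and role names are hypothetical placeholders, not our actual API:

# Illustrative only: pattern IDs and role names are hypothetical placeholders.
ROLE_BY_PATTERN = {
    "pattern_A": "pen",     # this sphere always acts as a pen
    "pattern_B": "eraser",  # this sphere always acts as an eraser
}

def handle_detection(pattern_id, position, orientation):
    role = ROLE_BY_PATTERN.get(pattern_id, "unknown")
    print(f"{role} detected at {position}, orientation {orientation}")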
What kind of quality of camera does the system need?
All our demos apart from the short-throw demo currently work with a camera with a 60 fps VGA sensor chip (less than 3 dollars in bulk) and a small M12 plastic lens for optics, i.e. standard webcam technology. For the short-throw setup a small, low-cost fisheye lens and a chip with at least HD720 resolution are required, which increases costs only slightly beyond typical webcam technology. Our proprietary tracking algorithm is able to extract accurate measurements even from distorted and slightly blurred images and therefore does not need to rely on the expensive precision glass lenses typically required for accurate pattern detection in Machine Vision, Photogrammetry or Optical Metrology.
How far away from the camera can you still detect?
The smallest pattern we can pick up with good precision for position and orientation detection in a camera image is 13 pixels in diameter; if only position is required, it can be as small as 7 pixels. How far the HOLOS sphere can be from the camera before it shrinks to this size depends on the pixel resolution and size of the sensor chip as well as the focal length of the optics. With a sufficiently narrow Angle of View (telescopic), the pattern could be picked up at high enough resolution from any distance. In practical terms, the camera can be placed at any plausible distance in even a very large room and still track the sphere, using only standard lenses and chip resolutions. Note that the distance does not influence the precision of the 2D tip detection on the screen. Accuracy of the 3D detection in mid-air does decline somewhat with longer-distance setups; however, this can be counterbalanced by using a higher resolution chip, for more detail look here.
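The geometry can be approximated with a simple pinhole-camera model; the Python sketch below is our own simplification for illustration (it ignores lens distortion), not the exact tracking math:

import math

# Pinhole approximation: how many pixels does the sphere span at a given
# distance, and how far away can it be before it shrinks below the minimum?
def sphere_image_px(sphere_diameter_m, distance_m, horizontal_res_px, hfov_deg):
    field_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return sphere_diameter_m * horizontal_res_px / field_width_m

def max_distance_m(sphere_diameter_m, min_px, horizontal_res_px, hfov_deg):
    return (sphere_diameter_m * horizontal_res_px
            / (2 * min_px * math.tan(math.radians(hfov_deg) / 2)))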
In some setups the camera tracks the sphere on the pen from behind the presenter - won't the presenter easily occlude the sphere?
Occlusion would only be a problem if we were tracking the tip of the pen, as other systems do. Those systems have to deal with the problem that the presenter's hand frequently occludes the tip of the pen from all viewpoints apart from ones high up and very close to the screen (the ultra-short-throw projector position). This means the camera position for those systems is very restricted, and long-throw or mobile solutions cannot be built this way. Because we uniquely track the back of the pen, occlusion by the presenter's hand or by the pen itself cannot occur. Occlusion by the shoulder or body of the presenter is not a problem either, because the tracking camera essentially has the point of view of a person in the audience or better, i.e. more central and/or elevated, and presenters naturally point and write so that what they do on the screen is visible to the audience and therefore also to the tracking camera. Note that in the demo clips tracking was done using long-throw and mobile setups, and it can be seen that occlusion does not create a problem and writing/pointing/drawing is very natural.
Can the camera be at an angle and/or very close to the screen, or even be integrated into the frame of the screen?
The system can track from almost any angle, because what is tracked is a sphere, which looks the same from whatever direction you view it, so tracking from the side (see a demo clip here) or even from one corner of the screen is possible. Note that the closer the camera is to the screen, the more wide-angle the lens must typically be to cover the full range and the more extreme the lens distortion will be. However, because our algorithm can deal with any kind of optical distortion (after calibration) without loss of speed and without increased computational load, this is not a problem. For a demonstration of a close-to-screen, fisheye lens setup look here.
How big is the detection volume?
The system can detect the sphere whenever it is in view of the camera, up to the distance at which the sphere becomes too small to be discernible by our algorithm. For simultaneous detection of position and orientation the sphere image can be as small as 13 pixels in diameter, and as small as 7 pixels in diameter if only position detection is needed. The shape of the detection volume or "frustum" is like a pyramid tilted on its side, with the tip of the pyramid at the camera lens. This means the further away you are from the camera, the larger the area it can see. Given, for example, a camera with HD1080 resolution and a 28° horizontal Angle of View, both position and orientation of the sphere can still be detected at 12 meters distance, where the covered area will be 6 meters x 3.3 meters; for more detail look at Figure 5 here and Table 2 here. For position detection alone, twice the detection distance can be achieved. The detection volumes in our current demos can be seen here.
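Plugging these example figures into the pinhole sketch from the detection-distance answer above reproduces them:

d = max_distance_m(0.04, min_px=13, horizontal_res_px=1920, hfov_deg=28)
print(d)                                    # ~11.8 m, i.e. about 12 m
width = 2 * d * math.tan(math.radians(14))  # half of the 28° Angle of View
print(width, width * 1080 / 1920)           # ~6 m wide x ~3.3 m high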
Is full classroom interaction possible?
Yes. This can be achieved easily with a moderately higher resolution chip and a fisheye lens setup which simultaneously looks at the screen surface and out into the classroom, as shown here. The pen can be turned around, and a pupil or user in the audience can hold it and use it like a 3D games controller to write/point/interact from a distance. For this kind of interaction, detecting the 3D position of the sphere alone is sufficient, which means the detection reaches twice as far as when the orientation of the pen has to be detected as well: the sphere's image can shrink to just 7 pixels in diameter. This extends the reach of the detection right to the outer edges of typical classrooms without compromising the accuracy of detection on the surface of the screen at the front.
