Posts Tagged ‘3D’
Last night, three days before the final presentation, I reached a state in my prototype that I am really happy with. The tracking stability is even better than expected with the new LED pen. I also added a basic graphical user interface and some cool features. But see for yourself, I explain everything in this 5-minute video:
I found this nice plugin for vvvv called “roentgen” by Sebastian Gregor and implemented it in my 3D tracking prototype. It looks great and takes the prototype to a whole new level. I was not sure if I’d be able to prototype this idea, so I am really happy that it worked in less than a day!
Here are some screenshots… I’ll shoot some videos later when I’ve implemented some more features.
I was thinking about printing out an actual scan of a deformed child’s skull in 3D at real size. It’s part of my augmented 3D interface for radiologists to report and discuss their findings. Unfortunately it’s still very expensive to print in 3D, but I was messing around with the generated 3D model a bit… Slicing it up was a great idea, because I am saving a lot of support material during the print. So I am down to 2,000 SEK, that’s about 200 euros (from originally 10,000 SEK).
I think 3D printing will become very cheap and quite common in the near future. Having your personal 3D printer at home and printing at prices similar to 2D paper prints will definitely be around at some point. Therefore I don’t think it’s a big stretch to suggest 3D printouts for some cases in the radiology department. Especially since we are scanning three-dimensional bodies, it’s only natural to look at them in 3D and also touch them. Touching could be especially interesting if the 3D printout is actually a print of human tissue, so it behaves and feels exactly like the part in the patient’s body. Imagine the radiologist could print out the tumor he/she identified and give it to the surgeon. The surgeon could see and feel the tumor and use it as a reference during the surgery. This idea was inspired by an actual practice in some surgeries: nowadays there are surgeries in which the removed tissue is sent back to the radiology department, so the radiologist can scan it, compare it to the initial body scan, and see if everything was removed as planned.
Printing human tissue has already been done. There are a lot of examples in medical research of growing bones and living organs to replace damaged body parts. There are also some examples in the arts using “artificial” human tissue. See for yourself:
Individually Manufactured Replacement Bones in Clinical Trial (medGadget)
A new way to print bones (ZDNet)
Thumbs up for 3D bone printer (NewScientist)
Print me a heart and a set of arteries (NewScientist)
Printing Organs on Demand (Wired)
Tiny Doll made of living cells (PinkTentacle)
Biojewellery (jewellery made by growing bone from your own DNA)
Thanks to Matt and Mikko; both their projects here at Umeå Institute of Design are really inspiring for this concept.
3D-Doctor, a 3D imaging application, already uses normal 3D printers to print out models of bones from body scans. It’s not real bone tissue… but still very interesting.
Advanced Custom Made Implants uses the same technology to print custom-made implants, though also not with real bone tissue.
And here is an interesting article about reproducing nature’s complex internal bone structure to produce strong artificial bones.
This is my latest 3D tracking prototype… I added two colored LEDs to the pen to track its position and rotation in space. I’ve also integrated a 3D model of an actual CT scan… The pen has two functions at the moment: you can point at the 3D model, and you can switch into a viewing mode in which the pen behaves like a camera, so you can easily zoom and view the (virtual) skull from all directions.
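For anyone curious about the math behind the two-LED tracking: once you have the 3D positions of both LEDs, you get the pen’s position and its pointing direction (though not the roll around its own axis, which two points can never determine). A minimal sketch of that math in Python with hypothetical coordinates, not taken from my vvvv patch:

```python
import numpy as np

def pen_pose(led_tip, led_back):
    """Derive a pen pose from two tracked LED positions.

    led_tip, led_back: 3D coordinates of the LED near the tip and the
    LED at the back of the pen. Returns the tip position and a unit
    direction vector. Note: two points give position and pointing
    direction only; roll around the pen axis stays ambiguous.
    """
    a = np.asarray(led_tip, dtype=float)
    b = np.asarray(led_back, dtype=float)
    direction = a - b
    direction /= np.linalg.norm(direction)  # normalize to unit length
    return a, direction

# Hypothetical sample: pen standing upright, 10 units long
tip, d = pen_pose([0.0, 0.0, 10.0], [0.0, 0.0, 0.0])
```

This also explains why the earlier single-LED pen could only give a position: you need at least two distinguishable points for a direction.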
Close-up photo of the pen… assembled with electrical tape. You can also see the digital projection of the pen on the lower screen (which is supposed to be the touchscreen).
I talked about the reporting part of the radiologist’s work before. Nowadays, reports from the radiology department are very low fidelity because of technology constraints. Current PACS systems are often not able to handle more than plain text and images. In my concept for the reporting part I am thinking of 3D prints… Printing out specific body parts (from the analyzed body scan) could be helpful for communicating the findings, especially in scenarios with surgeons and orthopedic surgery.
This is why I want to print a 3D model of a deformed skull. I generated the 3D model from an actual CT scan with OsiriX. But even though the skull is from a kid, it’s still a big object for 3D printing. I got an estimate of more than 3 kg of material, which would result in a price of 10,000 SEK (ca. 1,000 euros). So I am now trying to reduce the amount of material in different ways… I have some experience with 3D modeling from my Industrial Design studies, but that was a long time ago. I’m having a lot of fun playing around with the 3D applications, and more importantly it’s actually really inspiring for my project. There are a lot of different ways of navigating and manipulating 3D models in the different applications, and I think I could get some ideas about how to use these methods for 3D viewing of medical scans. So I’ve tried AliasStudio, Cinema4D, Rhino and SolidWorks and will see if I can implement some of these ideas in my concept. This also reminded me of the Adobe Photoshop Extended version for medical image viewing. Maybe there could be an extended version of a 3D modeling tool for radiologists?
Here are some pictures of the 3D file I am working with right now. It’s a very complex file with a lot of polygons, which makes it hard to work with.
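Since it’s the sheer polygon count that makes the file hard to handle, mesh decimation is the obvious fix. One crude but fast approach is vertex clustering: snap every vertex to a coarse grid and merge everything that lands in the same cell. A minimal NumPy sketch of the idea on raw vertex data (a real skull mesh would of course need a proper file loader and face re-indexing on top of this):

```python
import numpy as np

def cluster_vertices(vertices, cell_size):
    """Vertex clustering: merge all vertices that fall into the same
    grid cell of size cell_size, replacing each cluster by its mean.
    Returns the merged vertices and an old->new index map."""
    vertices = np.asarray(vertices, dtype=float)
    cells = np.floor(vertices / cell_size).astype(int)  # grid cell per vertex
    # 'inverse' maps each original vertex to its occupied-cell index
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    merged = np.zeros((counts.size, 3))
    for axis in range(3):  # mean position per cell, one axis at a time
        merged[:, axis] = np.bincount(inverse, weights=vertices[:, axis]) / counts
    return merged, inverse

# 10,000 random points in a unit cube collapse to at most 5**3 cells
pts = np.random.rand(10000, 3)
merged, idx = cluster_vertices(pts, cell_size=0.2)
```

The coarser the cell size, the fewer vertices survive; quality is worse than proper edge-collapse decimation, but for a quick print estimate it would be enough.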
And here a close-up view from the inside of the skull… (lots of polygons and points!)
This is a prototype for pointing and drawing in a 3D space. I think this could be helpful for annotating and marking 3D body scans.
I used a simple color tracker to detect the pink top of the pen. One webcam tracks the x and y position, and a second webcam tracks the z position of the pink pen top. But you could also imagine this working by tracking a finger or even multiple fingers.
I think it is very interesting to make these 3D interface explorations… but how reasonable are they for the radiologist’s work? At this stage of the 3D concepts I still see a lot of problems.
In my last meeting with my mentor Mine we discussed the idea of the physical-cube interface. Mine had a good point in mentioning the difference between looking at a 3D model on your own and the situation in a video conference. I think the augmented reality approach (in which you see the actual video stream of your own hands and the cube, augmented with the digital 3D model) is great for discussing a case online in a video conference, but doesn’t make much sense while analyzing a case on your own. If you are on your own, you don’t really need to see your own hands on the screen.
So I think there should be two concepts – one for working alone and another one for discussing and showing a case online.
In the second version of my physical-cube interface prototype you cannot see the video stream. I tried to make the tracking more stable and also implemented some of my initial ideas for viewing actual radiology data. This is still a very early prototype, but I have the feeling that I am on the right track here. While playing around with the prototype I became more familiar with its advantages and problems. It already feels really natural to view a 3D model this way. But I think the other ideas I’ve implemented (pressure sensor, moving the cube left/right to change transparency) need some more consideration. At this stage there are too many things happening at the same time.