A virtual colonoscopy starts with computed tomography (CT), a common diagnostic technology that uses X-rays to record cross-sectional 2D images of the body’s interior. A 3D model is constructed by segmenting the colon from the rest of the abdomen and applying an electronic cleansing algorithm to digitally remove residual fecal material. Next, doctors use visualization software to navigate a virtual fly-through of the colon. If a polyp or suspicious growth is found, they can perform a virtual biopsy and investigate further.
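The idea behind electronic cleansing can be sketched in a few lines. This is a simplified illustration, not the actual algorithm used in practice: it assumes contrast-tagged fecal material shows up as bright voxels in the CT volume, and simply reassigns those voxels the intensity of air. The threshold value and Hounsfield-unit figures here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of electronic cleansing on a CT volume.
# Voxel intensities are in Hounsfield units (HU): air in the colon
# lumen is about -1000 HU, while orally tagged fecal material is
# bright (high HU). The threshold below is an assumed value.
AIR_HU = -1000
TAGGING_THRESHOLD = 200  # assumed cutoff for contrast-tagged material

def electronic_cleanse(volume_hu):
    """Replace contrast-tagged voxels with air so the lumen reads as empty."""
    cleansed = volume_hu.copy()
    tagged = cleansed > TAGGING_THRESHOLD
    cleansed[tagged] = AIR_HU
    return cleansed

# Toy 1D "scanline" through the colon: air, soft tissue, tagged stool, air.
scan = np.array([-1000, -950, 40, 60, 450, 500, 55, -1000])
print(electronic_cleanse(scan).tolist())
```

Real cleansing algorithms must also handle partial-volume effects at the boundary between tagged material and tissue, which a plain threshold cannot.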
Virtual colonoscopy outperforms optical colonoscopy in certain ways, advocates claim. A University of Wisconsin study, for example, found it better at finding 8 mm and 10 mm polyps. It is also better at finding polyps hidden in the folds and around the corners of the colon’s twisting tube, and at reliably reaching the colon’s farthest point, the caecum. It can also do something optical colonoscopy, by its nature, cannot: spot polyps on the colon’s outer wall. And because it is noninvasive, a virtual colonoscopy avoids the rare but deadly tears or perforations that can occur during an optical colonoscopy (and which can require immediate surgery).
On July 13, 2010, Stony Brook University received a $1.4 million National Science Foundation grant to build the "Reality Deck," an immersive gigapixel display housed in a 40' x 30' x 11' room in Stony Brook University's Center of Excellence in Wireless and Information Technology (CEWIT). The room will contain 308 LCD display screens driven by an 85-node graphics computing cluster that rivals the performance of modern supercomputers. It will fully immerse users in 1.25 billion pixels of information, approaching the visual acuity of the human eye, according to the project director, Arie E. Kaufman, PhD, Distinguished Professor and Chair of the Computer Science Department and Chief Scientist at CEWIT.
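The 1.25-billion-pixel figure is easy to sanity-check against the screen count. The per-panel resolution below is an assumption (2560x1600 was a typical high-end LCD resolution of the era; the source gives only the screen count and total pixel count):

```python
# Back-of-the-envelope check of the Reality Deck's pixel count.
# 308 screens and 1.25 gigapixels are from the text; the 2560x1600
# per-panel resolution is an assumed value for illustration.
screens = 308
width, height = 2560, 1600  # assumed per-panel resolution

total_pixels = screens * width * height
print(total_pixels)           # 1,261,568,000
print(round(total_pixels / 1e9, 2))  # ~1.26 gigapixels
```

At that assumed resolution, 308 panels give roughly 1.26 gigapixels, consistent with the stated 1.25 billion.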
The Reality Deck will allow incredibly detailed viewing of scans from the University Medical Center's new 320-slice computed tomography (CT) scanner, both for virtual colonoscopies and for rapid diagnosis of acute chest pain patients. Other proposed projects include satellite imaging, nanoelectronics, climate modeling, microtomography, survey telescopes for astronomical applications, detecting suspicious persons in a crowd, and news and blog analyses.
The Lydia text analytics system builds relational models of people, places, and things through natural language processing of news, blog, and other web sources. Statistical analysis of entity frequencies and collocations enables us to track the temporal and spatial distribution of news entities: who is being talked about, by whom, when, and where? We encourage the reader to visit our news (www.textmap.com) and blog (www.textblg.com) websites to see our analysis of text drawn from roughly 1,000 daily US and international online news sources and millions of blog postings. The Lydia news analysis project currently involves four PhD students and several master's students. Over the past four years we have developed 70,000 lines of software addressing every phase of the pipeline: news gathering (web spidering and text extraction), NLP-based analysis (entity extraction, pronoun resolution, and sentiment analysis), database management and statistical analysis (relationship identification, synonym set construction), and visualization (temporal, spatial, and network analysis).
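The kind of entity-frequency and collocation statistics described above can be sketched as follows. This is a toy illustration, not Lydia's actual code: the sample articles and the `extract_entities` helper with its fixed entity list are hypothetical stand-ins for Lydia's NLP-based entity extraction.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical stand-in for NLP-based entity extraction: the real
# system recognizes entities in open text rather than matching a list.
KNOWN_ENTITIES = {"Stony Brook", "NSF", "Arie Kaufman"}

def extract_entities(text):
    return {e for e in KNOWN_ENTITIES if e in text}

# Hypothetical sample articles: (date, text) pairs.
articles = [
    ("2010-07-13", "NSF awards grant to Stony Brook; Arie Kaufman leads."),
    ("2010-07-14", "Stony Brook builds display; Arie Kaufman comments."),
]

daily_freq = defaultdict(Counter)  # date -> entity mention counts
cooccur = Counter()                # unordered entity pairs seen together

for date, text in articles:
    ents = extract_entities(text)
    daily_freq[date].update(ents)
    for pair in combinations(sorted(ents), 2):
        cooccur[pair] += 1

print(dict(daily_freq["2010-07-13"]))
print(cooccur.most_common(1))  # strongest collocation
```

Aggregating such counts over dates and sources is what makes the "who, by whom, when, and where" queries possible; collocation counts feed the relationship-identification stage.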