Posted by: viscenter | May 23, 2012

The Vis Center Welcomes Dr. Ruriko Yoshida

Students in math classes often complain that they will never use their mathematical knowledge outside of school.  They may balance a checkbook, but will statistics change the world?  Dr. Ruriko Yoshida uses statistics to solve real-world problems such as how diseases mutate, how to optimize resources and evacuation plans in an emergency, and how to develop therapies for special needs children. She studies statistical analysis of genetics, optimization problems, and applications of graphical models.
 
Dr. Ruriko Yoshida recently joined the faculty of the Vis Center, though she has worked in the Statistics Department at the University of Kentucky for the past six years.  She began collaborating with Dr. Samson Cheung on a project to optimize the placement of cameras for a security system, using as few cameras as possible while balancing affordability with functionality. Dr. Cheung’s research also involves optimization problems and applying graphical models.
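The camera-placement problem described above has the flavor of a classic set-cover optimization: each candidate mounting point covers some set of regions, and the goal is to cover every region with as few cameras as possible. The sketch below is a generic greedy set-cover heuristic offered only to illustrate that flavor; the candidate positions, coverage sets, and function names are hypothetical and do not represent Dr. Yoshida’s or Dr. Cheung’s actual formulation.

```python
# A minimal greedy set-cover sketch for camera placement (illustrative only).
# Candidate camera positions and the regions each one can see are made up here;
# a real system would derive coverage from floor plans and camera geometry.

def greedy_camera_placement(coverage, regions):
    """Pick cameras one at a time, always choosing the candidate that
    covers the most still-uncovered regions."""
    uncovered = set(regions)
    chosen = []
    while uncovered:
        # Candidate that covers the largest number of uncovered regions.
        best = max(coverage, key=lambda cam: len(coverage[cam] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("Some regions cannot be covered by any camera.")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

if __name__ == "__main__":
    # Hypothetical example: 3 candidate mounting points, 5 regions to watch.
    coverage = {
        "cam_lobby":   {"entrance", "front_desk"},
        "cam_hall":    {"front_desk", "hallway", "stairs"},
        "cam_parking": {"parking", "entrance"},
    }
    regions = {"entrance", "front_desk", "hallway", "stairs", "parking"}
    print(greedy_camera_placement(coverage, regions))
    # -> ['cam_hall', 'cam_parking']: two cameras suffice in this toy example.
```

Greedy selection does not always find the true minimum, but it is a standard, well-understood approximation and a common starting point for coverage problems of this kind.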
 
At the Vis Center, Dr. Yoshida will work with Dr. Cheung on his mirror-imaging project.  A computer display that acts as a mirror is a useful learning tool for autistic children.  The image on the computer “mirror” can be modified to help the child learn via video self-modeling.
 
Dr. Yoshida applies the same optimization methods and graphical models across disciplines.  She is able to use statistics to answer questions in biology, technology, and education.  Her research improves the lives of others.  She said, “I want to do something good in this society.  So I love actually applying some mathematical statistical methods.”
 
To learn more about Dr. Ruriko Yoshida, click here.

Posted by: viscenter | May 14, 2012

Seeing Math in a New Way

For most freshmen, the easiest part of Math 109 is looking at a graph. Actually understanding how the graph relates to the equation may be another story, but seeing the points and general shape of the graph causes no problem.  However, Haden Pike, a visually impaired freshman studying computer science, has the opposite problem – “I understand what the function was, but exactly how it was graphed, I had no idea.”

As a computer science student, he must take certain math classes. Haden had a math tutor, but he needed a way to visualize the graphs when he was studying on his own.  The Math Department enlisted Bill Gregory of the Vis Center to develop a way to help Haden succeed in class.

Originally, Bill wanted to use the Vis Center’s 3D printer to make a 3D print of each graph.  However, it took two or three hours to print one graph, so it was not a practical way to help Haden visualize the graphs.  Instead, Bill used a laser cutter and GeoGebra, a freeware program, to generate each graph.  The line of the function was indented, so Haden could run a pen along the indentation and gather enough information to understand the function’s graph.
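As a rough illustration of how a tactile graph outline might be generated automatically, the sketch below samples a function and writes its curve as an SVG polyline that a laser cutter or engraver could trace. The function, dimensions, and output file name are arbitrary choices for illustration; this is not the actual GeoGebra-based workflow Bill used.

```python
# Hypothetical sketch: sample y = x^2 and write the curve as an SVG polyline
# that a laser cutter/engraver could trace to produce a tactile graph.
# Dimensions and scaling are arbitrary choices for illustration.

def graph_to_svg(f, x_min, x_max, width=400, height=400, samples=200):
    xs = [x_min + (x_max - x_min) * i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    y_min, y_max = min(ys), max(ys)

    def to_px(x, y):
        # Map math coordinates to SVG pixel coordinates (y axis flipped).
        px = (x - x_min) / (x_max - x_min) * width
        py = height - (y - y_min) / (y_max - y_min) * height
        return px, py

    points = " ".join("%.2f,%.2f" % to_px(x, y) for x, y in zip(xs, ys))
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">\n'
        '  <polyline points="%s" fill="none" stroke="black"/>\n'
        "</svg>\n" % (width, height, points)
    )

if __name__ == "__main__":
    with open("parabola.svg", "w") as out:
        out.write(graph_to_svg(lambda x: x * x, -2.0, 2.0))
```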

With Haden relying totally on tactile feedback, Bill needed to work out some kinks.  The extra grid lines often confused Haden, so Bill made these lines very faint.  Haden also had trouble finding the graph’s origin, so Bill put a hole on the graph to denote it.

Haden says, “It helped me understand visually what the expression was.” The Vis Center’s models allowed Haden to review on his own time; he could refer back to his notes and have a physical representation of the graph examples from class.  Haden is considering a career in teaching computer science; he said, “So along with computer science that I also enjoy, why not teaching?”

Sen-Ching Cheung (left, with a doctoral student) used his work on a video-surveillance project for the Department of Homeland Security to come up with a way of improving video self-modeling, a teaching method for autistic people.

Sen-Ching Cheung, an associate professor of electrical and computer engineering at the University of Kentucky, never expected to become an autism expert.

But Mr. Cheung, the father of a 5-year-old boy with autism, has seen his career take a twist that mirrors the unpredictable nature of the disease itself: He is putting his digital-imaging skills to work on what he hopes will be a promising technological therapy for autistic children. He is one of a number of scientists seeking federal support for their approaches to autism research, which has an increasingly vocal public constituency and is nearing what could be crucial advances.

Read the rest of the article here.

Posted by: viscenter | April 30, 2012

Integrating Digital Papyrology (IDP)

Learning how people lived during ancient times requires piecing together clues like a jigsaw puzzle. One good source of these clues is the bits and pieces of papyri that have been preserved across centuries. These bits of papyrus may contain a shopping list, a land contract, or other information that tells us how these ancient people lived their day-to-day lives.

However, studying these various papyri has been a great challenge given their fragility and difficulty of access. Recently, Vis Center researchers collaborated with a team from Duke University to create a new online system for papyrological research. Dr. Joshua Sosin from Duke University and Ryan Baumann from the University of Kentucky were part of the team that worked together on the project, called Integrating Digital Papyrology (IDP). The final product is an online system for collaborative editing.

The greatest challenge of this project was to make the system user-friendly. In order to create the editing tools, the team had to create a new markup language called Leiden+, which combines XML with traditional papyrological markup conventions. The system also allows for translation edits for each papyrus and for other notes to be made. The user submits changes to an editorial board, which then authorizes them.

Allowing researchers easy access to communicate about revisions to the texts accelerates the pace of research. The team hopes that the online system will replace the slow pace of print mechanisms for publishing these papyri. Dr. Sosin points out that, given the rarity of these papyri, “every bit of data is deadly precious,” which means the online system presents a real opportunity for deepened research for the e-papyrological community.
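To give a sense of the kind of transformation Leiden+ performs, the sketch below converts a single traditional Leiden convention, square brackets around text an editor has restored in a lacuna, into an EpiDoc-style <supplied> element. This is a deliberately simplified, hypothetical illustration; the real Leiden+ grammar and the IDP editing system handle far more conventions than this.

```python
# Simplified, hypothetical illustration of a Leiden-style -> XML conversion.
# Only one convention is handled: [text] marks letters restored by the editor
# in a lacuna, which maps to an EpiDoc-style <supplied reason="lost"> element.
# The actual Leiden+ grammar used by IDP is much richer than this.
import re

def leiden_brackets_to_xml(line):
    return re.sub(
        r"\[([^\]]*)\]",
        r'<supplied reason="lost">\1</supplied>',
        line,
    )

if __name__ == "__main__":
    sample = "και τον [αδελφον] αυτου"   # made-up fragment with a restored word
    print(leiden_brackets_to_xml(sample))
    # -> και τον <supplied reason="lost">αδελφον</supplied> αυτου
```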

Posted by: viscenter | April 16, 2012

“The Ascending Journey” Premieres on May 13 on KET

Click here to view the trailer.

Nancy Clauter’s world was changed forever the day she heard three little words: “You have cancer.” For Nancy, a music professor at the University of Kentucky and principal oboe with the Lexington Philharmonic, the diagnosis of incurable cancer meant not only facing mortality, but also the loss of her ability to communicate through music.

The Ascending Journey, a 30-minute documentary, follows Nancy’s journey from her diagnosis of a rare form of blood cancer called multiple myeloma, through chemotherapy and groundbreaking stem cell transplant therapy. As she battles cancer, Nancy fights to continue to play and inspire others through her strength and her song.

“The Ascending Journey” will air six times in May. It will premiere on May 13 at 4:00 am and 10:30 pm.
The upcoming airdates are:
KET: Sunday, May 13 at 4:00 am EDT
KET: Sunday, May 13 at 10:30 pm EDT
KET2: Thursday, May 17 at 10:30 pm EDT
KET2: Monday, May 21 at midnight EDT
KETKY: Monday, May 28 at 5:00 am EDT
KETKY: Wednesday, May 30 at 9:30 pm EDT

To learn more about our Media For Research Lab please click here.

For more information contact Julie Martinez: jmartinez@engr.uky.edu

Over one million children in the United States have voice disorders. These problems typically begin in childhood and therefore can disrupt critical periods in development. Rita Patel, Ph.D., Kevin Donohue, Ph.D., and Harikrishnan Unnikrishnan study vocal fold motion in children. Vocal fold vibratory motion is needed for producing speech. They presented their paper “Analysis of high-speed digital phonoscopy pediatric images” at the XX Annual Pacific Voice Conference on Optical Imaging, Therapeutics, and Advanced Technology in Head and Neck Surgery and Otolaryngology, held in conjunction with SPIE Photonics West at the Moscone Center in San Francisco, CA, this January. They were presented with the Pacific Voice & Speech Foundation 2012 Award for Best Scientific Paper.

Vocal fold dysfunction limits the ability to speak and interact in society. Unfortunately, technical limitations have held back research into vocal fold motion, which is vital for measuring treatment outcomes. However, developments in high-speed video systems have created new research opportunities in vocal fold motion for more efficient diagnosis and treatment. Dr. Donohue pointed out that “This work is one of the first to describe and assess the processes for extracting quantitative information from high-speed video recordings of children.” They custom-built a laser system to use alongside high-speed digital imaging to explore the relationship between the immature vocal system and the formation of vocal fold nodules.
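As a rough illustration of what extracting quantitative information from high-speed video can involve, the sketch below thresholds each frame to estimate an "open glottal area" waveform over time, a common starting point for quantitative voice analysis. The synthetic data, threshold value, and array shapes are assumptions for illustration only; this is not the authors’ published processing pipeline.

```python
# Hypothetical sketch: estimate an "open area" waveform from high-speed
# laryngeal video by counting dark pixels in each frame.
# Generic illustration only, not the authors' published method.
import numpy as np

def glottal_area_waveform(frames, threshold=60):
    """frames: array of shape (num_frames, height, width), grayscale 0-255.
    Returns the number of dark (open-glottis) pixels in each frame."""
    dark = frames < threshold                         # boolean mask per frame
    return dark.reshape(frames.shape[0], -1).sum(axis=1)

if __name__ == "__main__":
    # Synthetic stand-in data: 100 frames of 64x64 noise with a dark
    # elliptical region whose size oscillates like a vibrating glottis.
    rng = np.random.default_rng(0)
    frames = rng.integers(100, 255, size=(100, 64, 64)).astype(np.uint8)
    yy, xx = np.mgrid[0:64, 0:64]
    for t in range(100):
        r = 6 + 4 * np.sin(2 * np.pi * t / 20)        # oscillating radius
        mask = ((xx - 32) ** 2 / 4 + (yy - 32) ** 2) < r ** 2
        frames[t][mask] = 20                          # dark "open" region
    area = glottal_area_waveform(frames)
    print(area[:10])   # quasi-periodic waveform tracking the oscillation
```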

Dr. Patel and Dr. Donohue hope that their research will help children suffering from vocal fold dysfunction. Dr. Patel added that “the goal of our research is to establish physiological biomarkers of unique vibratory features of vocal development with high speed digital imaging and to lay the foundation for development of biomechanical modeling and assessment tools” to detect at-risk children.

Jonathan Soli, an undergraduate who worked at the University of Kentucky’s Vis Center in the summer of 2011, recently won the Twin Cities IEEE Paper Competition and will move on to the IEEE Region 4 Student Paper Competition. Soli is a student at Hamline University. UK professor Kevin Donohue advised Soli on his Electrical and Computer Engineering REU project, “Verification of Simulated Acoustic Environments Utilizing Cross-Correlation and Power Spectral Density.”

“Imagine a noisy room full of people conversing and, with a hidden microphone array, having the ability to covertly focus on a specific conversation of interest,” Soli pointed out. Professor Donohue researches how to block extraneous sound and allow focused listening using microphone arrays.

Soli compared a computer-simulated acoustic environment to a real acoustic environment. Propagation delays, secondary echo timing, and the decay of sound as it reverberates through the room were the metrics most relevant to Soli’s project.
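Cross-correlation is the standard way to estimate the propagation delay between a source signal and what a microphone records, and it is one of the metrics named in the paper’s title. The sketch below demonstrates the basic idea on synthetic signals; the sample rate, noise level, and delay are made up, this is not a reconstruction of Soli’s actual verification code, and the power spectral density comparison is omitted.

```python
# Generic sketch: estimate the propagation delay between a source signal and
# a delayed, noisy copy of it using cross-correlation (numpy only).
# Synthetic data; not a reconstruction of the paper's verification code.
import numpy as np

fs = 16000                       # sample rate in Hz (arbitrary choice)
n = 2000                         # 0.125 s of signal, kept short for speed
rng = np.random.default_rng(1)

source = rng.standard_normal(n)                      # noise-like "speech" burst
true_delay = 120                                     # samples (7.5 ms at 16 kHz)
received = np.concatenate([np.zeros(true_delay), source])[:n]
received += 0.1 * rng.standard_normal(n)             # add measurement noise

# Full cross-correlation; the lag of the peak is the delay estimate.
corr = np.correlate(received, source, mode="full")
lags = np.arange(-n + 1, n)
estimated = lags[np.argmax(corr)]

print("true delay: %d samples, estimated: %d samples (%.2f ms)"
      % (true_delay, estimated, 1000.0 * estimated / fs))
```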

His project helps researchers know how to improve the computer simulation of acoustic environments. Soli established a performance baseline for the simulation software, so researchers can now focus on improving its quality. The simulator streamlines research on acoustic environments by simplifying optimization studies and avoiding the hassle of setting up a multitude of physical microphones.

At the Twin Cities competition, Soli won a cash prize and a spot in the IEEE Region 4 Student Paper Competition. He will travel to Indianapolis, IN for the competition on May 5th; the IEEE competition will be held in conjunction with the 2012 Electro/Information Technology (EIT) Conference, hosted by IUPUI.

Homer’s Iliad is back at the publishing house, but turning these pages involves only a light tap on an iPad screen. With each digital page turn, the Imaging the Iliad iPad app transports the revered, but fragile, Venetus A Iliad manuscript from an inaccessible Venetian library into the hands of students, researchers, and classical enthusiasts around the world.

A screen shot of the Imaging the Iliad iPad app released March 10, 2011.


A page from the 1901 Comparetti images of the Venetus A


During the summer of 2007, researchers from the University of Kentucky Center for Visualization, University of Houston, College of the Holy Cross, Furman University, and Brandeis University gathered in Venice, Italy, at the Marciana Library to digitally preserve the Venetus A. Considered by some to be the most important manuscript of the Homeric stories, the Venetus A also contains layers of commentary and annotations, usually attributed to scholars at the Royal Library of Alexandria.

The only previous images had been made in 1901 by Domenico Comparetti, but the process was highly destructive since the manuscript was sliced apart, placed on glass and photographed, and then rebound. In contrast, the modern process allowed the intact manuscript to be gently placed in a Meyer Conservation Copystand. Page by page, the team carefully scanned the ancient manuscript, capturing both high-quality digital photos and structured light data to create a 3D model of the surface, which can then be used to digitally “flatten” the manuscript and remove distortions from the text. (Click here to read the 2008 Odyssey article about the project)
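The structured-light step captures each page’s 3D shape so the images can be digitally flattened. The sketch below illustrates the core idea in one dimension: given a height profile across a curved page, resample the scanline so that equal distances along the curved surface map to equal distances in the flat output. It is a toy example under simplified assumptions (curvature along one axis only, synthetic data) and not the Vis Center’s actual flattening pipeline.

```python
# Toy 1D illustration of "flattening": given the height profile of a curved
# page along one scanline, resample pixel intensities so that equal arc length
# along the surface maps to equal spacing in the output. Simplifying
# assumption: the page curves only along x. Not the actual Vis Center pipeline.
import numpy as np

def flatten_scanline(intensities, heights):
    # Arc length along the curved surface: ds = sqrt(dx^2 + dh^2), dx = 1 pixel.
    dh = np.diff(heights)
    ds = np.sqrt(1.0 + dh ** 2)
    s = np.concatenate([[0.0], np.cumsum(ds)])       # cumulative arc length
    # Resample so output samples are equally spaced in arc length.
    s_uniform = np.linspace(0.0, s[-1], len(intensities))
    return np.interp(s_uniform, s, intensities)

if __name__ == "__main__":
    # Synthetic scanline: alternating "text" stripes on a bulged page.
    x = np.arange(200, dtype=float)
    intensities = 255.0 * (np.sin(x / 3.0) > 0)
    heights = 30.0 * np.sin(np.pi * x / 199.0)        # bulge peaking mid-page
    flat = flatten_scanline(intensities, heights)
    print(flat[:10])
```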

The Venetus A being prepared for a scan at the Marciana Library in 2007.


The photos were then made publicly available through the University of Houston’s Homer Multitext data archive. But the Vis Center team had plans to use an undergraduate research team to make the Iliad accessible to a much broader audience.

Undergraduate students Zach Whelchel and Carla Lopez Narvaez did research during the summer of 2010 at the UK Center for Visualization. Their assignment was to create an iPad app that would allow the reader to interact with the Venetus A Iliad as well as an English translation. “The project was an ambitious one that was just concrete enough to be possible,” said Whelchel. “Our team was given a lot of space to envision how to best display the folio images.”

Whelchel and Narvaez present their work on the Iliad app.


The team was given the 3D Iliad images, the corresponding Greek text, and the English text of the Iliad. “The images had already been matched up with corresponding Greek text, but making that correspond with the English transcription was quite difficult, conceptually,” said Ryan Baumann, a Vis Center staff member who oversaw the student work. Over the course of the summer they worked to create an iPad app that would allow the reader to read the English text side by side with the corresponding folio of the Venetus A. Whelchel said that “to do this we compared two XML documents. The first had the line found on each folio (Ex: Book 1, Lines 32-56) and the second had the entire Iliad (in English) tagged by books and lines.”
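A minimal sketch of the kind of matching Whelchel describes might look like the following: one XML document lists the line range each folio contains, another holds the English text tagged by book and line, and the two are joined to give each folio its translation. The element and attribute names here are invented for illustration; they are not the project’s actual schemas.

```python
# Hypothetical sketch of joining two XML documents: one mapping folios to the
# Iliad line ranges they contain, the other holding the English text tagged by
# book and line. Element/attribute names are invented for illustration only.
import xml.etree.ElementTree as ET

FOLIOS_XML = """
<folios>
  <folio id="12r" book="1" first="1" last="25"/>
  <folio id="12v" book="1" first="26" last="50"/>
</folios>
"""

ENGLISH_XML = """
<iliad>
  <book n="1">
    <line n="1">Sing, O goddess, the anger of Achilles son of Peleus</line>
    <line n="2">that brought countless ills upon the Achaeans.</line>
    <line n="26">...</line>
  </book>
</iliad>
"""

def english_for_folio(folio, english_root):
    """Return the English lines that fall inside this folio's line range."""
    book = english_root.find("book[@n='%s']" % folio.get("book"))
    first, last = int(folio.get("first")), int(folio.get("last"))
    return [line.text for line in book.findall("line")
            if first <= int(line.get("n")) <= last]

folios = ET.fromstring(FOLIOS_XML)
english = ET.fromstring(ENGLISH_XML)
for folio in folios.findall("folio"):
    lines = english_for_folio(folio, english)
    print("folio %s: %d English lines" % (folio.get("id"), len(lines)))
```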

“We wanted to build the app as a template that could eventually encompass other texts. Because of this, we took the long route on parsing through the folios to match the lines properly,” said Whelchel, a sophomore Media Communications and Math double-major at Asbury University in Wilmore, KY. “We had to build an intuitive way to ‘page through’ the book. We wanted it to feel like you were actually turning a page so the user could better interact.” Most surprising was “the level of complexity that goes into every page turn.”

Narvaez, a Computer Science student at the University of Puerto Rico – Rio Piedras, said their problem was “how to bring ‘The Iliad’ from the oldest form of print to the newest form of print on the iPad.” Narvaez interned at the Vis Center through the Vis U program, which brings Computer Science undergraduates from the University of Puerto Rico for summer research opportunities in visualization and virtual environments. “This new experience helped me…to work with new people and combine all our ideas…to manage and resolve the problems we found each day during the process of our research and…to keep learning new things,” said Narvaez.

Narvaez working on the “Imaging the Iliad” iPad app as a part of Vis U 2010.

Dr. Chris Blackwell, a Classics professor at Furman University, was part of the Venetus A imaging team in 2007. As a member of the Homer Multitext project through the Center for Hellenic Studies at Harvard, Dr. Blackwell has worked for over a decade bringing the words of Homer to new life in electronic media. He has found the Imaging the Iliad app to be an exciting means to do just this. “This iPad app is a beautiful example of where all such projects are going, and the pleasant surprises that lie in store. When we started thinking about giving these manuscripts life electronically, no one dreamed of a touch-based, lightweight, vastly capable and delightfully simple device like the iPad. To see images and text brought together – so quickly! – by the researchers in Kentucky is truly inspiring. The current application is all the proof anyone needs that the work of digitization will serve not only high-end scientific research, but will invite a very wide audience to share in these cultural treasures. As a Classicist, I find this thrilling!”

Few people have the privilege of traveling to the Marciana Library in Venice and studying the actual Iliad folios. But only a month after its March release, the Imaging the Iliad app has already been downloaded more than 800 times. It is available for free download in the Apple iTunes App Store.

'Imaging the Iliad" app allows users to search and bookmark the text, as well as closely examine the high-res folio images.


“The Iliad app brings one of the oldest mediums of communication to one of the newest. This readily accessible preservation of history and culture will hopefully set the standard of how scholarly research should be published,” said Whelchel. Next, the team is “currently working on a 3D viewer that shows off the models we have of each folio. It really brings the ancient book to life when you can spin it around and see the fine creases.”

Posted by: viscenter | March 16, 2011

Puerto Rican Undergraduates Experience Research At UK

Dr. Seales presents to a group of students at the University of Puerto Rico

While the days are still winter gray in Kentucky during February, in Puerto Rico the sun is shining and a soft wind blows off the Atlantic over the capital city of San Juan. On the campus of the University of Puerto Rico near the center of the city, a group of computer science students are meeting with two members of the Vis Center to learn about internship opportunities at the University of Kentucky.

For the past ten years, computer science students from the University of Puerto Rico have spent summers on the campus of the University of Kentucky gaining valuable research skills as well as cross-cultural experience. Vis Center Director and Computer Science Professor Dr. Brent Seales first visited the island of Puerto Rico in 2000 to begin recruiting students for summer undergraduate research opportunities. Since then, about thirty computer science students from the University of Puerto Rico have done research in visualization, networking, and other computer science research areas at UK.

In the summer of 2010, the Vis Center launched its VisU program, a summer undergraduate research opportunity for Kentucky and Puerto Rico students. Six students participated in the program, completing research projects that ranged from medical imaging and digital humanities applications to iPad app development.

Students at the University of Puerto Rico

This summer, the VisU program is expected to have between six and eight students working on summer research projects. These students will gain valuable experience while contributing meaningful work to the research team. Carla Lopez Narváez, one of the University of Puerto Rico students involved in the 2010 VisU program, explained her experience this way: “This experience helped me learn how to work together with new people in order to manage and solve the problems we found during the process of our research. We learned how to apply the things we learned to our lives as well as to keep learning new things in order to accomplish our research and become more professional. I would love to do more research in the future!”

For more information on the VisU program please visit www.vis.uky.edu/visu

Posted by: viscenter | January 24, 2011

Innovative Technology Goes on Stage with UK Opera

The Vis Center’s innovative new high-definition projection technology, originally developed for non-theatrical use, will be used for the first time in a theatrical setting for the UK Opera production of “Porgy and Bess,” followed by the Atlanta Opera production.

The technology was originally developed at the Vis Center through a partnership with Fort Knox. Its initial application was for the military, with the goal of building rapidly deployable, high-resolution screens to be used in training or battle. Other potential uses include any environment that needs the mobility and convenience of a display, from schools to museums and medical applications.

The projectors are mounted to the scaffolding in a system that also frames the screen.


While front- and rear-projected backdrops are nothing new to theatre, they can cause problems for the set design and for the performers. Normal front projectors can cast shadows and images onto the performers, and most rear projectors must be placed a great distance behind the screen to create a large enough image of scenery, which can limit the stage space. With the Vis Center’s new rear projection system, only four and a half feet separate the 54 projector units from their attached movable fabric screen units, which are an impressive 24×30′ and 24×15′.

The technology, dubbed SCRIBE (self-contained rapidly integratable background environment) by the Vis Center, utilizes a software system that blends the projections into one image, which will include still images and video related to the various scenes in the production.
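Software blending for tiled projection typically overlaps adjacent projector images and applies complementary intensity ramps across the overlap so the seam disappears. The sketch below computes such ramps for two horizontally overlapping tiles; it is a generic illustration of the idea, with arbitrary resolution and overlap width, and is not the SCRIBE software itself.

```python
# Generic sketch of edge blending for two horizontally overlapping projector
# tiles: complementary linear intensity ramps in the overlap region make the
# two contributions sum to full brightness. Not the actual SCRIBE software.
import numpy as np

def blend_weights(width, overlap):
    """Per-column weights for the left and right tiles of a two-tile wall."""
    left = np.ones(width)
    right = np.ones(width)
    ramp = np.linspace(1.0, 0.0, overlap)   # fades out across the overlap
    left[width - overlap:] = ramp           # right edge of left tile fades out
    right[:overlap] = ramp[::-1]            # left edge of right tile fades in
    return left, right

if __name__ == "__main__":
    width, overlap = 1024, 128              # arbitrary tile width and overlap
    left_w, right_w = blend_weights(width, overlap)
    # In the overlap, the two weights sum to 1, so brightness stays uniform.
    print(left_w[width - overlap:] + right_w[:overlap])
```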

This project grew out of the synergy that is possible through multi-disciplinary research collaboration. The Director of the Vis Center, Dr. Brent Seales, came into contact with UK Opera Director Everett McCorvey through a chance meeting when they were both speaking at a luncheon hosted by Mrs. Patsy Todd. Both quickly grasped the possibilities of collaboration, and over the next year the idea of using this technology as part of the opera production emerged.

A model for the screen stage design.


Dr. Seales states that this type of multi-disciplinary research is the goal of the Vis Center. “We plan to see more of these types of real applications of our technology continue to take place as we work with other researchers across the University in the future. The possibilities are amazing if you consider what research can do when people step outside of their regular environments to interact with those with a distinctly different background.”

Bill Gregory, lead engineer for the Vis Center, reflected on the value of applying his technical ability to the theatre production: “It’s been fascinating to work with the theatre crew. Being an engineer, I am focused on the practical results and never look at the artistic aspect, while they didn’t realize the technology that could be used to achieve their artistic ends. We didn’t know what problems existed for them and they didn’t know what to ask for until we collaborated.”

The footprint for the giant screens is about four feet in depth. This opens many new options for theatres with limited space backstage.


The images, which were taken and edited by the Vis Center team, will depict real locations in Charleston, SC and the islands off the coast of North Carolina. Actual hurricane footage from The Weather Channel will be used as well. Combining these projected images with a minimal amount of three-dimensional scenery will create a vibrant and exciting production.

A scene from the UK production of "Porgy and Bess"


The use of this projection system has already been drawing interest from other opera and theatre companies from around the country.

Read more about the production:

UK Opera

Herald Leader

UK Now
