Although there is currently no established biomarker for Parkinson's disease, diagnosis may be just a phone call away.
In a paper recently published in BMC Medical Genomics, scientists from Sandra L. Rodriguez-Zas' lab at the University of Illinois have identified a panel of biomarkers that help predict survival in patients with glioblastoma multiforme, an aggressive malignant brain tumor. The study also found that survival varies between genetic profiles, and that factors such as race, gender and therapy may have a significant impact on the survival and quality of life of individuals affected by glioblastoma multiforme.
A native of Trinidad and Tobago, Dr. Aldrin Gomes decided to specialize in the biochemical differences between normal and diseased hearts during his graduate studies at the University of the West Indies. A significant portion of his research relied on protein separation and purification techniques, making SDS-PAGE and protein blotting essential tools in most of his experiments. But in a country where resources are scarce, unrelenting heat is destructive to sensitive biological materials, and slow delivery times are common for product orders, Gomes and his colleagues began to analyze the electrophoretic workflow to determine how to make the process as efficient as possible. “Because we performed a lot of electrophoresis, we published some articles whereby we looked at how we could standardize things to improve resolution in our results,” says Gomes. “We heavily researched aliquoting methods, sample buffers (methods for making them and determining actual shelf lives), whether or not buffers can be reused — even minor factors such as gel pouring techniques and plate thickness.”
Since Gomes “grew up,” scientifically speaking, in an environment where experiments must be planned far in advance and resources cannot be wasted, he cultivated the habit of designing experiments and procedures that made the best possible use of tools and time while ensuring optimal results. This followed him through his graduate work and his subsequent career, first as a research associate, then in his current role as assistant professor in the Neurobiology, Physiology and Behavior department in the College of Biological Sciences, and Physiology and Membrane Biology department in the School of Medicine at UC Davis.
For the first time ever, scientists are using computers and genomic information to predict new uses for existing medicines.
A National Institutes of Health-funded computational study analyzed genomic and drug data to predict new uses for medicines that are already on the market. A team led by Atul J. Butte, M.D., Ph.D., of Stanford University, Palo Alto, Calif., reports its results in two articles in the Aug. 17 online issue of Science Translational Medicine.
Butte’s group focused on 100 diseases and 164 drugs. They created a computer program to search through the thousands of possible drug-disease combinations to find drugs and diseases whose gene expression patterns essentially cancelled each other out. For example, if a disease increased the activity of certain genes, the program tried to match it with one or more drugs that decreased the activity of those genes.
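The matching idea can be sketched in a few lines of code. This is a toy illustration of signature anti-correlation, not the authors' actual pipeline; the gene values and the Spearman-based score are assumptions for the example:

```python
import numpy as np
from scipy.stats import spearmanr

def signature_score(disease_sig, drug_sig):
    """Score a drug-disease pair by correlating their gene expression
    signatures (e.g., log fold-changes over the same set of genes).
    A strongly negative correlation suggests the drug pushes those
    genes in the opposite direction from the disease."""
    rho, _ = spearmanr(disease_sig, drug_sig)
    return rho

# Toy data: the disease up-regulates genes 0-2 and down-regulates
# genes 3-5; the candidate drug does roughly the opposite.
disease = np.array([2.1, 1.8, 1.5, -1.2, -1.7, -2.0])
drug    = np.array([-1.9, -1.5, -1.1, 0.9, 1.4, 1.8])

score = signature_score(disease, drug)
print(score)  # close to -1: flag this pair for follow-up
```

In a real screen, a score like this would be computed for every drug-disease pair and the most anti-correlated pairs prioritized for experimental validation.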
Below is a talk that Dr. Butte gave recently at Packard Children’s Hospital, in which he explains some of the remarkable work done in his lab.
Click here to read more.
Yesterday, we told you about a study that found that family physicians are ill-prepared when it comes to diagnosing and treating patients based on their genomic data. As a follow-up to that story, I’d like to bring your attention to a recent post by W. Gregory Feero, MD, PhD on KevinMD, which discusses the overwhelming growth of genomic data and how the pace of discovery is far exceeding the capacity of the health care system’s IT infrastructure.
According to Dr. Feero, medical record keeping in the United States is a far cry from being able to house the hundreds of petabytes of genomic data that will eventually need to be stored in its systems. Furthermore, upgrading to compatible systems is bound to be prohibitively expensive. He also suggests that the falling cost of genome sequencing might make it cheaper to sequence an individual's data on an as-needed basis rather than storing it all en masse.
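The "hundreds of petabytes" figure is easy to sanity-check with a back-of-envelope calculation. The per-genome size and patient count below are assumptions for illustration (roughly 100 GB of raw sequence data per genome at typical whole-genome coverage), not figures from Feero's post:

```python
# Back-of-envelope estimate of national genomic storage needs.
bytes_per_genome = 100e9     # assumed ~100 GB of raw reads per genome
patients = 5_000_000         # assumed: a small slice of the US population

total_pb = bytes_per_genome * patients / 1e15  # convert bytes to petabytes
print(f"{total_pb:.0f} PB")  # prints "500 PB"
```

Even under these conservative assumptions, a few million sequenced patients already lands in the hundreds-of-petabytes range, which is the scale Feero argues current medical record systems cannot absorb.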
For further reading, visit Data overload and the pace of genomic science.