Chip-based vaccines; supercomputers hunt for supernovae; activism through algorithms.
Microfluidic cell-squeezing
MIT researchers have shown it is possible to use a microfluidic cell-squeezing device to introduce specific antigens inside the immune system’s B cells, providing a new approach to developing and implementing antigen-presenting cell vaccines.
These types of vaccines are created by reprogramming a patient’s own immune cells to fight invaders, and are believed to hold great promise for treating cancer and other diseases. At the same time, inefficiencies have limited their translation to the clinic, and only one such therapy has been approved by the Food and Drug Administration.
Using a microfluidic device, MIT researchers were able to overcome a genetically programmed barrier to antigen uptake — by squeezing the B cells. Through “CellSqueeze,” the device platform originally developed at MIT, the researchers pass a suspension of B cells and target antigen through tiny, parallel channels etched on a chip. A positive-pressure system moves the suspension through these channels, which gradually narrow, applying a gentle pressure to the B cells. This “squeeze” opens small, temporary holes in their membranes, allowing the target antigen to enter by diffusion. This process effectively loads the cells with antigens to prime a response of CD8 — or “killer” — T cells, which can then kill cancer cells or other target cells.
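To see why even a brief membrane opening can load a useful amount of antigen, here is a toy first-order diffusion model in Python. The time constant and concentrations are illustrative assumptions, not parameters of the MIT device.

```python
import math

# Toy model of diffusive antigen loading through transient membrane pores.
# While the pores are open, the intracellular concentration C(t) relaxes
# toward the external concentration C_out with a time constant tau:
#     C(t) = C_out * (1 - exp(-t / tau))
# All numbers below are illustrative assumptions, not device parameters.
C_OUT = 1.0   # external antigen concentration (normalized)
TAU_S = 0.5   # assumed loading time constant while pores are open (seconds)

def loaded_fraction(open_time_s: float) -> float:
    """Fraction of the external concentration reached before the membrane reseals."""
    return C_OUT * (1.0 - math.exp(-open_time_s / TAU_S))

# Even sub-second pore lifetimes load a meaningful fraction of antigen.
for t in (0.1, 0.5, 1.0, 2.0):
    print(f"pores open {t:.1f} s -> loaded to {loaded_fraction(t):.0%} of C_out")
```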
In connected work, the researchers hope to ultimately create a whole class of therapies that involve taking out a patient’s own cells, telling them what to do, and putting them back into the patient’s body to fight a disease.
This work is continuing at SQZ Biotech, the company the researchers founded after developing CellSqueeze at MIT in order to further develop and commercialize the platform.
Taking advantage of the platform’s microfluidic nature, they envision a future system deployed at the bedside or in the field: instead of shipping cells off to a big, centralized facility, the procedure could be done in a hospital or doctor’s office.
Finding and studying supernovae
According to UC Berkeley researchers, Type Ia supernovae are famous for their consistency, but new observations suggest their origins may not be uniform at all. Using a “roadmap” of theoretical calculations and supercomputer simulations, Berkeley Lab astronomers observed for the first time a flash of light caused by a supernova slamming into a nearby star, allowing them to determine the stellar system from which the supernova was born.
The researchers assert this finding confirms one of two competing theories about the birth of Type Ia supernovae. But taken with other observations, the results imply that there could be two distinct populations of these objects.
“By calibrating the relative brightness of Type Ia supernovae to several percent accuracy, astronomers were able to use them to discover the acceleration of the Universe. But if we want to push further and constrain the detailed properties of the dark energy driving acceleration, we need more accurate measurements. If we don’t know where Type Ia supernovae come from, we can’t be totally confident that our cosmological measurements are correct,” explained Daniel Kasen, an associate professor of astronomy and physics at UC Berkeley, who holds a joint appointment at Lawrence Berkeley National Laboratory.
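To make the standard-candle logic concrete, here is a minimal sketch of the distance modulus relation that underlies those cosmological measurements. The peak absolute magnitude and the example apparent magnitudes are textbook-style illustrative values, not numbers from this study.

```python
import math

# Type Ia supernovae are "standard candles": after calibration, their peak
# absolute magnitude is roughly M ~ -19.3. Comparing that to the apparent
# magnitude m gives the luminosity distance via the distance modulus:
#     m - M = 5 * log10(d / 10 pc)
M_PEAK = -19.3  # calibrated peak absolute magnitude (illustrative value)

def luminosity_distance_mpc(apparent_mag: float, absolute_mag: float = M_PEAK) -> float:
    """Distance in megaparsecs implied by the distance modulus."""
    d_parsec = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_parsec / 1e6

# Example: a Type Ia observed at apparent magnitude 24 lies at roughly a few
# gigaparsecs, the regime where dark-energy measurements live.
for m in (14.0, 19.0, 24.0):
    print(f"m = {m:4.1f}  ->  d ~ {luminosity_distance_mpc(m):10.1f} Mpc")
```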
In 2010, Kasen used theoretical arguments and simulations run on supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) to predict a new way to test the origins of supernovae. He showed that if a supernova is born in a binary star system, the collision of the debris with the companion star will produce a brief, hot flash of light. The challenge is then to find a Type Ia event shortly after it ignites and quickly follow it up with ultraviolet telescopes. Using an automated supernova-hunting pipeline, the intermediate Palomar Transient Factory (iPTF), which runs machine-learning algorithms on NERSC supercomputers, astronomers did just that: they found iPTF14atg just hours after it ignited in a nearby galaxy. Follow-up observations with NASA’s Swift space telescope showed ultraviolet signals consistent with Kasen’s predictions.
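iPTF’s real-time pipeline is far more involved, but as a rough sketch of the machine-learning step, a classifier that scores image-subtraction candidates so that only promising transients get rapid follow-up might look like the following. The features, labels, and thresholding scheme here are stand-ins, not the collaboration’s actual code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features extracted from each candidate detection in a
# difference image (new exposure minus reference), e.g. signal-to-noise,
# shape ellipticity, distance to nearest known star, bad-pixel count.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))                     # stand-in feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in labels: 1 = real transient

# Train a "real vs. bogus" classifier on previously vetted detections.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Score tonight's candidates; anything above a threshold is forwarded for
# rapid follow-up -- the speed that makes catching an event like iPTF14atg
# within hours of explosion possible.
candidates = rng.normal(size=(5, 4))
scores = clf.predict_proba(candidates)[:, 1]
print(np.round(scores, 2))
```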
iPTF is a scientific collaboration between Caltech; Los Alamos National Laboratory; the University of Wisconsin, Milwaukee; the Oskar Klein Centre in Sweden; the Weizmann Institute of Science in Israel; the TANGO Program of the University System of Taiwan; and the Kavli Institute for the Physics and Mathematics of the Universe in Japan. NERSC is a DOE Office of Science User Facility.
Peter Nugent, a Berkeley Lab scientist and member of the iPTF collaboration, noted: “We often talk about how computational science is the third pillar of the scientific method, next to theory and experimentation. This finding really brings that point home. In this case, we can see how computational models and tools are driving discovery and transforming our knowledge about the cosmos.”
Crowdsourcing for social good
Experts in data science are solving old problems in a new way by leveraging advanced algorithms. One such group is DrivenData, a Harvard offshoot that frames pertinent questions about a problem to be solved, posts the raw data online, and recruits a volunteer army of hundreds of top data scientists to solve the puzzle. Whoever creates the most predictive algorithm wins a cash prize and bragging rights in the data science community, and all of the contestants get to exercise their creative skills and the satisfaction of knowing they are helping to address an important public need.
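As a hedged illustration of how “most predictive algorithm wins” can be decided, the sketch below scores hypothetical submissions against held-out labels with a common competition metric, log loss. The metric choice, team names, and data are all assumptions for illustration, not DrivenData’s actual setup.

```python
import numpy as np

def log_loss(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-15) -> float:
    """Binary cross-entropy: lower means a more predictive submission."""
    p = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

# Hypothetical held-out labels the organizers keep secret, plus two
# competitors' submitted probabilities for the same rows.
y_true = np.array([1, 0, 1, 1, 0, 1])
submissions = {
    "team_alpha": np.array([0.9, 0.2, 0.8, 0.7, 0.1, 0.6]),
    "team_beta": np.array([0.6, 0.4, 0.5, 0.9, 0.3, 0.7]),
}

# Rank the leaderboard: the lowest loss (most predictive model) wins.
leaderboard = sorted(submissions.items(), key=lambda kv: log_loss(y_true, kv[1]))
for rank, (team, preds) in enumerate(leaderboard, start=1):
    print(f"{rank}. {team}: log loss = {log_loss(y_true, preds):.4f}")
```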
Using this crowdsourcing model, DrivenData said its goal is to unlock the potential of big data to help mission-driven non-profits and public sector agencies operate more effectively—and have more impact. They call it activism through algorithms.
The startup originated at the Harvard School of Engineering and Applied Sciences (SEAS), where co-founders Peter Bull and Isaac Slavitt were classmates in the computational science and engineering master’s program. Students in the program are asked to apply the skills they learn to solve a problem using real data. Bull and Slavitt realized that most of the readily available data-crunching opportunities had to do with big commercial enterprises.
A third co-founder, Greg Lipstein, who was Bull’s college roommate and will earn an MBA from Harvard Business School in May, brought needed business operations experience to the team.
Although non-profits and government agencies, just like the commercial sector, are collecting more data than ever before, a large data literacy gap has emerged in the social and government sectors. “They’re collecting the data but they don’t know what the data can do for them, what questions to ask of it,” Bull said. “Even if they know what questions to ask, they’re not able to get those questions answered because the shortfall in supply means data scientists are going to be expensive for a very long time. The social sector is going to lag even further behind. A competition seemed like a really good way of connecting these kinds of organizations to that kind of talent, both in terms of translating what the nonprofits need into something the data scientists would understand and giving them real solutions that they can use.”
DrivenData’s first competition attracted nearly 300 participants, including many of the top people in the field.
One of DrivenData’s core, big-picture goals is to build a pipeline of socially minded data scientists to solve the data literacy and data capacity problems in the social and public sectors.