
Booth Talk Schedule


Tuesday, Nov. 15
11:00 - 11:30 Ultra-fast computing pipeline for metagenome analysis on TSUBAME 2.0
  Takashi Ishida - Tokyo Institute of Technology
   
11:30 - 12:00 Molecular Dynamics Simulation of a Biomolecule with High Speed, Low Power Using GPU-Accelerated TSUBAME2.0 Supercomputer
  Masakazu Sekijima - Tokyo Institute of Technology
   
15:00 - 15:30 TSUBAME2.0 & GSIC Efforts for Building HPCI
  Shinichiro Takizawa - Tokyo Institute of Technology
   
15:30 - 16:00 Power management and operation of TSUBAME2.0 green supercomputer
  Toshio Endo - Tokyo Institute of Technology
   
16:30 - 17:00 Petaflops scale turbulence simulation on TSUBAME 2.0
  Rio Yokota - King Abdullah University of Science and Technology
   
Wednesday, Nov. 16
11:00 - 11:30 2-PFLOPS Performance of Dendritic Solidification Simulation on the TSUBAME 2.0 Supercomputer
  Takashi Shimokawabe - Tokyo Institute of Technology
   
11:30 - 12:00 How to Run Your CUDA Program Anywhere
  Wu-chun Feng - Virginia Tech
   
13:30 - 14:00 Physis: An Implicitly Parallel Programming Model for Stencil Computations on Large-Scale GPU-Accelerated Supercomputers
  Naoya Maruyama - Tokyo Institute of Technology
   
14:00 - 14:30 Future Exascale HPC through ParalleX Execution Model
  Thomas Sterling - Indiana University
   
14:30 - 15:00 A Holistic approach for Exascale resilience
  Franck Cappello - INRIA-Illinois Joint Laboratory on PetaScale Computing
   
15:00 - 15:30 TSUBAME2.0 -- A Year Later, onto Exascale
  Satoshi Matsuoka - Tokyo Institute of Technology
   
15:30 - 16:00 Graph500 Challenge on TSUBAME 2.0
  Toyotaro Suzumura - Tokyo Institute of Technology / IBM Research - Tokyo
   
16:00 - 16:30 Large-scale CFD applications on GPU-rich supercomputer TSUBAME2.0
  Takayuki Aoki - Tokyo Institute of Technology
   
Thursday, Nov. 17
11:00 - 11:30 Petaflop Biofluidics on the Tsubame 2.0 Supercomputer
  Simone Melchionna - National Research Council Italy
   
12:30 - 13:00 Petascale Data-Intensive Supercomputing on TSUBAME2.0
  Hitoshi Sato - Tokyo Institute of Technology

 

Speaker


Title Ultra-fast computing pipeline for metagenome analysis on TSUBAME 2.0
Abstract Metagenome analysis is the study of the genomes of uncultured microbes obtained directly from microbial communities in their natural habitats.
In metagenome analysis, sensitive sequence homology search processes are required because current databases do not include sequence data for most of the microbes in a sample. This search requires a large amount of computation time and is thus a bottleneck in current metagenome analysis.
We developed a fully automated pipeline for metagenome analysis that can process the huge volumes of data produced by a next-generation sequencer in realistic time by using the large computing power of the TSUBAME 2.0 supercomputer. The pipeline offers a choice of two sequence homology search tools: 1) BLASTX, the standard homology search software used in many metagenomic studies, and 2) GHOSTM, our fast GPU-based homology search software.
GHOSTM is our original sequence homology search program. It is implemented with NVIDIA's CUDA and finds homologues quickly by using GPU-computing techniques. Its search sensitivity is much higher than that of BLAT, a well-known fast homology search program, and is sufficient for metagenome analysis.
We performed a large-scale metagenome analysis with our pipeline on 71 million DNA reads sampled from polluted soils and obtained with a next-generation sequencer. The pipeline shows almost linear speedup with the number of computing cores. With BLASTX as the homology search program, it processes about 24 million reads per hour on 16,008 CPU cores (1,334 computing nodes); with GHOSTM, it processes about 60 million reads per hour on 2,520 GPUs (840 computing nodes). These results indicate that the pipeline can process the output of a single next-generation sequencer run in a few hours. We believe our pipeline will accelerate metagenome analysis with next-generation sequencers.
Speaker Takashi Ishida
Affiliation Department of Computer Science, Graduate School of Information Science and Engineering, Tokyo Institute of Technology
Biography 2009/10 - Assistant professor,
Department of Computer Science,
Graduate School of Information Science and Engineering,
Tokyo Institute of Technology

2006/4 - 2009/9 Researcher,
Human Genome Center,
Institute of Medical Science,
University of Tokyo

 


Title Molecular Dynamics Simulation of a Biomolecule with High Speed, Low Power Using GPU-Accelerated TSUBAME2.0 Supercomputer
Abstract We will describe the usability of GPU-accelerated molecular dynamics (MD) simulations for studying biomolecules from the viewpoints of speed and power consumption.
The results of simulations showed that GPUs were considerably faster and more energy-efficient than CPUs.
These results are encouraging enough for us to use GPUs for MD simulations.
Speaker Masakazu Sekijima
Affiliation Global Scientific Information and Computing Center, Tokyo Institute of Technology
Biography Masakazu Sekijima received his Ph.D. from the University of Tokyo in 2002. From 2002 he worked at the National Institute of Advanced Industrial Science and Technology (AIST) as research staff, from 2003 as a Research Scientist, and from 2008 as a Planning Officer. From 2006 to 2010 he also served as a visiting associate professor at Waseda University. Since 2009 he has been an Associate Professor at Tokyo Institute of Technology. His current research interests are high performance computing, bioinformatics, and protein science. He is a member of IPSJ, JSBI, IEEE, ACM, the Protein Society, and the Biophysical Society.

 


Title TSUBAME2.0 & GSIC Efforts for Building HPCI
Abstract HPCI (High Performance Computing Infrastructure) is a federated supercomputer environment in which the principal supercomputers in Japan, including the K Computer and TSUBAME, are joined to help researchers use petascale systems efficiently. It will enter operation next year and will provide 13 PFlops of computational power, multi-petabyte global shared storage, and a cloud hosting service. Tokyo Tech's proposal, RENKEI-PoP, an appliance for e-Science resource federation used in the RENKEI project, contributes to HPCI's design and software, mainly for the storage architecture and cloud hosting. We will introduce RENKEI-PoP and the current status of the HPCI environment.
Speaker Shinichiro Takizawa
Affiliation GSIC, Tokyo Institute of Technology
Biography Shinichiro Takizawa received his Ph.D. from Tokyo Institute of Technology in 2009. He has been working at the Global Scientific Information and Computing Center of Tokyo Institute of Technology as a researcher (2009-2010) and as an assistant professor (2010-). His research interests are high performance networks and distributed computing environments such as grid and cloud computing.

 


Title Power management and operation of TSUBAME2.0 green supercomputer
Abstract We report on the operation of the Tokyo Tech TSUBAME2.0 supercomputer during the power crisis caused by the powerful earthquake of March 11, 2011.
While reducing energy consumption is, and will remain, the most important issue in the design and operation of supercomputers, capping peak power consumption also became essential during the power crisis.
We report the measures taken in operating TSUBAME2.0 this summer under tight limits on time and resources, and the issues that remain to be solved.
Speaker Toshio Endo
Affiliation GSIC, Tokyo Institute of Technology
Biography Toshio Endo received his Ph.D. degree in science from the University
of Tokyo in 2001, and is an associate professor at Global Scientific
Information and Computing Center, Tokyo Institute of Technology.
His research interests are high performance computing on petascale
supercomputers, low power computing, GPGPU, and parallel algorithms on
heterogeneous systems.

 


Title Petaflops scale turbulence simulation on TSUBAME 2.0
Abstract We perform the calculation of isotropic turbulence on the TSUBAME 2.0 system. For this simulation, we compare an FFT-based spectral method and an FMM-based vortex method under the same conditions in an attempt to quantify the relative performance of FFT and FMM on large GPU systems. The current FMM achieved a sustained performance of 1.0 PFlops on 4096 GPUs. Weak scaling results showed 74% parallel efficiency at 4096 GPUs for the FMM, and 14% parallel efficiency for the FFT on 4096 CPUs. The number of particles/mesh points in the weak scaling test reached 64 billion for 4096 processes, which took approximately 100 seconds. The present FMM uses a hybrid approach that optimizes between cell-cell, cell-particle, and particle-particle interactions, and achieves high performance on GPUs. The FMM is also extended to handle periodic boundary conditions by considering the effect of periodic images using multipoles. The MPI communication is overlapped with the GPU computation of the local cells and particles.
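
As a rough illustration of the communication/computation overlap mentioned above, the sketch below starts a non-blocking exchange with neighbor ranks and lets the GPU evaluate local particle interactions in the meantime. It is a minimal CUDA/MPI sketch under assumed names (evaluate_local_p2p, overlap_step, the ring-neighbor exchange), not the authors' FMM code.

    // Minimal sketch: overlap a non-blocking MPI exchange with GPU work on
    // locally owned particles. All names are illustrative assumptions.
    #include <mpi.h>
    #include <cuda_runtime.h>

    // Direct particle-particle pass over the locally owned bodies.
    __global__ void evaluate_local_p2p(const float4 *b, float4 *acc, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float ax = 0.f, ay = 0.f, az = 0.f;
        for (int j = 0; j < n; ++j) {
            float dx = b[j].x - b[i].x, dy = b[j].y - b[i].y, dz = b[j].z - b[i].z;
            float r2 = dx * dx + dy * dy + dz * dz + 1e-6f;  // softened distance
            float inv = rsqrtf(r2) / r2;
            ax += dx * inv; ay += dy * inv; az += dz * inv;
        }
        acc[i] = make_float4(ax, ay, az, 0.f);
    }

    // Exchange remote cell data with neighbor ranks while the GPU computes.
    void overlap_step(const float4 *d_bodies, float4 *d_acc, int n_local,
                      float *send_buf, float *recv_buf, int buf_len,
                      int left, int right, cudaStream_t stream) {
        MPI_Request reqs[2];
        MPI_Irecv(recv_buf, buf_len, MPI_FLOAT, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(send_buf, buf_len, MPI_FLOAT, right, 0, MPI_COMM_WORLD, &reqs[1]);

        int threads = 256, blocks = (n_local + threads - 1) / threads;
        evaluate_local_p2p<<<blocks, threads, 0, stream>>>(d_bodies, d_acc, n_local);

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  // remote data has arrived
        cudaStreamSynchronize(stream);              // local contributions are done
    }
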
Speaker Rio Yokota
Affiliation Center for Extreme Computing, King Abdullah University of Science and Technology
Biography 2009 PhD Mechanical Engineering, Keio University, Japan
2009.02 - 2010.08 Dept. Mathematics, University of Bristol
2010.09 - 2011.08 Dept. Mechanical Engineering, Boston University
2011.09 - Center for Extreme Computing, KAUST

 


Title 2-PFLOPS Performance of Dendritic Solidification Simulation on the TSUBAME 2.0 Supercomputer
Abstract The mechanical properties of metal materials largely depend on their intrinsic internal microstructures. To develop engineering materials with the expected properties, predicting the patterns in solidified metals is indispensable. The phase-field method is the most powerful method known for simulating micro-scale dendritic growth during solidification in a binary alloy. To obtain a realistic description of solidification, however, phase-field simulation requires computing a large number of complex nonlinear terms over a fine-grained grid. Owing to this heavy computational demand, previous work on simulating three-dimensional solidification with phase-field methods succeeded only in describing simple shapes. Our new simulation techniques achieve unprecedentedly large scales, sufficient for handling the complex dendritic structures required in materials science. Our simulations on the GPU-rich TSUBAME 2.0 supercomputer at the Tokyo Institute of Technology have demonstrated good weak scaling and achieved 2.000 PFlops in single precision for our largest configuration, using 4,000 GPUs along with 16,000 CPU cores; to our knowledge, this is the first petascale result for a real stencil application.
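
For readers unfamiliar with the computational pattern, the sketch below shows a generic 7-point stencil update in CUDA, the kind of per-grid-point kernel such a solver repeats over a fine three-dimensional grid. It is a simplified illustration only: the actual phase-field model evaluates far more complex nonlinear terms coupling the phase and concentration fields, and the kernel name and coefficients here are assumptions.

    // Simplified 7-point stencil update on an nx x ny x nz grid; one call
    // advances the field "in" to "out" by one explicit step.
    __global__ void stencil7(const float *in, float *out,
                             int nx, int ny, int nz, float c0, float c1) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        int z = blockIdx.z * blockDim.z + threadIdx.z;
        if (x <= 0 || y <= 0 || z <= 0 || x >= nx - 1 || y >= ny - 1 || z >= nz - 1)
            return;                                  // skip boundary and out-of-range threads
        long i = (long)z * nx * ny + (long)y * nx + x;
        out[i] = c0 * in[i]
               + c1 * (in[i - 1]  + in[i + 1]                           // x neighbors
                     + in[i - nx] + in[i + nx]                          // y neighbors
                     + in[i - (long)nx * ny] + in[i + (long)nx * ny]);  // z neighbors
    }

Launched over a 3D grid of thread blocks, one such call advances the field by one time step; a multi-GPU run additionally exchanges halo planes between neighboring GPUs between steps.
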
Speaker Takashi Shimokawabe
Affiliation Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology
Biography Takashi Shimokawabe is a Ph.D. student at the Graduate School of Tokyo Institute of Technology. He received an M.Sc. in Physics from Tokyo Institute of Technology in 2007. His research interests are general-purpose computing on graphics processing units (GPGPU), computational fluid dynamics, stencil applications,
and high performance computing. He was an SC10 Best Student Paper finalist, and his paper has been selected as a Gordon Bell Prize finalist at SC11.

 


Title How to Run Your CUDA Program Anywhere
Abstract While many may resist the proprietary nature of CUDA and its need to run on NVIDIA GPUs, we present a tool that "enables your CUDA program to run anywhere." The tool, which we call CU2CL (short for CUDA-to-OpenCL), is a source-to-source translator that makes clever reuse of the Clang compiler framework to automatically translate CUDA source code into OpenCL source code. Currently, the CU2CL translator covers the primary constructs found in the CUDA runtime API, and it has successfully translated many applications from the CUDA SDK and the Rodinia benchmark suite. The performance of applications automatically translated by CU2CL is on par with that of their manually ported counterparts.
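
To make concrete what such a source-to-source translation involves, here is a trivial CUDA fragment annotated with the rough OpenCL counterparts a CUDA-to-OpenCL translator has to generate. The correspondence shown in the comments is only illustrative and is not CU2CL's actual output.

    #include <cuda_runtime.h>

    __global__ void scale(float *d, float a, int n) {   // OpenCL: __kernel void scale(__global float *d, ...)
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // OpenCL: int i = get_global_id(0);
        if (i < n) d[i] *= a;
    }

    void run(float *h, int n) {
        float *d;
        cudaMalloc(&d, n * sizeof(float));               // OpenCL: clCreateBuffer(...)
        cudaMemcpy(d, h, n * sizeof(float),
                   cudaMemcpyHostToDevice);              // OpenCL: clEnqueueWriteBuffer(...)
        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);     // OpenCL: clSetKernelArg(...) + clEnqueueNDRangeKernel(...)
        cudaMemcpy(h, d, n * sizeof(float),
                   cudaMemcpyDeviceToHost);              // OpenCL: clEnqueueReadBuffer(...)
        cudaFree(d);                                     // OpenCL: clReleaseMemObject(...)
    }
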
Speaker Wu-chun Feng
Affiliation 1. Associate Professor; Dept. of Computer Science, Dept. of Electrical & Computer Engineering, and Virginia Bioinformatics Institute, Virginia Tech; Blacksburg, VA
2. Site Co-Director; NSF Center for High-Performance Reconfigurable Computing at Virginia Tech; Blacksburg, VA
Biography Dr. Feng is currently an Associate Professor of Computer Science and Electrical & Computer Engineering at Virginia Tech (VT), where he directs the Synergy Lab and serves as a VT site co-director for the National Science Foundation Center for High-Performance Reconfigurable Computing (CHREC). In addition, he is an adjunct faculty member at the Virginia Bioinformatics Institute at Virginia Tech and in the Dept. of Cancer Biology and Translational Science Institute at the Wake Forest University School of Medicine. Feng came to VT from Los Alamos National Laboratory, following previous professional stints at Ohio State University, Purdue University, the University of Illinois at Urbana-Champaign, EnergyWare, Orion Multisystems, Vosaic, the IBM T.J. Watson Research Center, and the NASA Ames Research Center. Dr. Feng has published 200+ peer-reviewed technical publications in high-performance networking and computing, high-speed systems monitoring and measurement, low-power and power-aware computing, computer science pedagogy for K-12, and bioinformatics.

Regarded as a visionary in green supercomputing, Feng first introduced the idea of "energy-efficient supercomputing" to the high-performance computing (HPC) community at SC 2001 and delivered Green Destiny, a 240-node cluster supercomputer in five square feet that consumed a mere 3.2 kilowatts of power (when booted diskless) in 2002. This cluster ultimately produced a Linpack rating of 101 Gflops, which would have placed it in the TOP500 List at the time. As a consequence, this green supercomputer achieved a level of distinction that led to international news coverage by the New York Times, CNN, the International Herald Tribune, PC World, Slashdot, and BBC News. Feng worked to establish a low-power supercomputing company, Orion Multisystems, and has been actively involved in architecting power-aware software that reduces energy consumption while maintaining performance as part of another start-up company called EnergyWare. He is also credited with developing the concept of The Green500 List in 2006, which officially made its debut during SC '07. Of more recent note is his work in high-performance GPU computing, both "in the small" (i.e., embedded devices) and "in the large" (i.e., the GPU-accelerated HokieSpeed supercomputer). His research in this space has delivered personal desktop supercomputing solutions to neuroscience, earthquake modeling, and bioinformatics.

Dr. Feng holds a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign, an M.S. in Computer Engineering, and B.S. degrees in Electrical & Computer Engineering and Music from Penn State University. In addition to being a Distinguished Scientist of the ACM and a Senior Member of the IEEE Computer Society, Dr. Feng has also been named to HPCwire's Top People to Watch list twice, once in 2004 and again in 2011.

 


Title Physis: An Implicitly Parallel Programming Model for Stencil Computations on Large-Scale GPU-Accelerated Supercomputers
Abstract This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translation, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization, with automatic optimizations such as overlapping of computation and communication. We demonstrate the feasibility of such automatic translation by implementing several structured grid applications in our framework. Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable to hand-written code, with good strong and weak scalability up to 256 GPUs.
Speaker Naoya Maruyama
Affiliation Tokyo Institute of Technology
Biography Naoya Maruyama received his Ph.D. degree in Computer Science from Tokyo Institute of Technology in 2008, and is an Assistant Professor at Global Scientific Information and Computing Center, Tokyo Institute of Technology. He has been working on research topics related to large-scale high performance computing, including fault tolerance, low power computing, and programming models for heterogeneous systems.

 


Title Future Exascale HPC through ParalleX Execution Model
Abstract While the world's largest systems, such as TSUBAME 2.0, continue to achieve unprecedented capability in computing performance and data handling through expanded multicore and GPU structures, methods for using such systems and applying them to the extreme challenges of science, technology, and societal needs are failing to achieve user programmability or problem generality: as many as three layers of programming interfaces are required to fully exploit all levels of parallelism in modern heterogeneous systems. Extending the scale of system hardware by two to three orders of magnitude while processor core performance remains approximately constant will only aggravate these challenges. This presentation will discuss an alternative methodology based on an emerging dynamic adaptive execution model, ParalleX, and recent research results derived from experiments with a proof-of-concept runtime system. A set of semantic constructs will be discussed to replace message-passing techniques and dramatically improve parallelism and scalability while enhancing node efficiency.
Speaker Thomas Sterling
Affiliation Center for Research in Extreme Scale Computing, Indiana University
Biography Dr. Thomas Sterling holds the position of Professor of Informatics and Computing at the Indiana University (IU) School of Informatics and Computing as well as serving as Associate Director of the PTI Center for Research in Extreme Scale Technologies (CREST). He is also an Adjunct Professor at the Louisiana State University (LSU) Center for Computation and Technology (CCT) and a CSRI Fellow at Sandia National Laboratories. Since receiving his Ph.D. from MIT in 1984 as a Hertz Fellow, he has engaged in applied research in fields associated with parallel computing system structures, semantics, and operation in industry, government labs, and academia. Dr. Sterling is best known as the "father of Beowulf" for his pioneering research in commodity/Linux cluster computing, for which he was awarded the Gordon Bell Prize in 1997 with his collaborators. He was the PI of the HTMT Project sponsored by NSF, DARPA, NSA, and NASA to explore advanced technologies and their implications for high-end system architectures; this three-year project involved a dozen institutions and 50 researchers investigating superconducting logic, holographic storage, optical networks, and Processor-In-Memory components. Other research projects included the DIVA PIM architecture project with USC-ISI, the Cray Cascade Petaflops architecture project sponsored by the DARPA HPCS Program, and the Gilgamesh high-density computing project at NASA JPL. Thomas Sterling is currently engaged in research on the ParalleX advanced execution model for extreme scale computing. This work aims to devise a new model of computation that establishes the foundational principles guiding the co-design of future-generation Exascale computing systems by the end of this decade. This research is conducted through several projects sponsored separately by DOE, NSF, DARPA, the Army Corps of Engineers, and NASA. Dr. Sterling is the co-author of six books and holds six patents.

 


Title A Holistic approach for Exascale resilience
Abstract Resilience is one of the major issues to address for Exascale computing. The INRIA-Illinois Joint Laboratory on Petascale Computing is following a holistic approach to this challenge, investigating and leveraging application and system properties to design scalable resilience techniques. In this talk I will focus on three techniques:
- Reducing the checkpoint overhead to 8% by using dedicated cores and local storage (joint work with TiTech)
- Reducing the overhead of fault-tolerance protocols to ~1% with a hybrid protocol that provides fault confinement and avoids global restart
- Predicting failures with ~90% precision (~50% recall) and a mean lead time of ~1 minute, i.e. enough time to complete proactive actions
These three techniques are among the set of mechanisms that we think need to be combined to form a resilience approach for Exascale that reduces the fault tolerance overhead to less than 10%.
Speaker Franck Cappello
Affiliation INRIA-Illinois Joint Laboratory on PetaScale Computing
Biography Franck Cappello holds a Senior Researcher position at INRIA and a
visiting research professor position at University of Illinois at
Urbana Champaign. He is the co-director with Prof. Marc Snir of the
INRIA-Illinois Joint-Laboratory on PetaScale Computing
(http://jointlab.ncsa.illinois.edu/) developing joint software
research in the context of the BlueWaters project
(http://www.ncsa.illinois.edu/BlueWaters/). He is a member of the
executive committee of IESP (International Exascale Software Project:
http://www.exascale.org) and chair of the "system software ecosystem"
for EESI (European Exascale Software Initiative:
http://www.eesi-project.eu/). He is an editorial board member of the
international Journal on Grid Computing, Journal of Grid and Utility
Computing and Journal of Cluster Computing. He is a steering committee
member of IEEE HiPC and IEEE/ACM CCGRID, Technical paper co-chair of
SC2011 and was the Program chair of HiPC 2010, IEEE NCA 2010, Program
co-Chair of IEEE CCGRID'2009 and General Chair of IEEE HPDC'2006.

 


Title TSUBAME2.0 -- A Year Later, onto Exascale
Abstract TSUBAME2.0 came into being on Nov. 1, 2010, and has been running in full production since then with very little interruption. Among the challenges were attaining stability in the machine, extracting maximum performance out of thousands of GPUs, and devising a scheduling model for 2,000 users of a mixed variety. One of the biggest challenges was coping with the substantial electricity shortage after the Fukushima disaster and the resulting national mandate for peak power conservation, which TSUBAME2.0 met gracefully without substantially sacrificing the user experience. The accolade for TSUBAME2.0 has been the wealth of application and system research results that have been achieved, including the two Gordon Bell Prize finalists at SC11. Such experiences are valuable stepping stones as we strive to achieve exascale in the coming years.
Speaker Satoshi Matsuoka
Affiliation Global Scientific Information and Computing Center, Tokyo Institute of Technology
Biography Satoshi Matsuoka is a full Professor at the Tokyo Institute of Technology, where he leads the TSUBAME supercomputer series, which became the 4th fastest in the world and was awarded the Green500 "Greenest Production Supercomputer in the World" title in November 2010 and June 2011. He co-led the Japanese national grid project NAREGI during 2003-2007 and is currently leading various projects, including the JST-CREST Ultra Low Power HPC project. He has chaired many ACM/IEEE conferences, including serving as SC09 Technical Papers Chair, SC11 Community Chair, and the planned Program Chair for SC13. His awards include the JSPS Prize in 2006, awarded by His Highness Prince Akishinomiya.

 


Title Graph500 Challenge on TSUBAME 2.0
Abstract Graph500 is a new benchmark that ranks supercomputers by executing a large-scale graph search problem. Our early study reveals that the provided reference implementations are not scalable in a large-scale distributed environment. In this talk we introduce our optimized method based on 2D partitioning and various other optimizations such as communication compression and vertex sorting. Our optimized implementation can solve a BFS (Breadth-First Search) of a large-scale graph with 2^36 (68.7 billion) vertices and 2^40 (1.1 trillion) edges in 16.97 seconds using 512 nodes and 12,288 CPU cores.
This record corresponds to 64.8 GE/s, which is the top-ranked score as of this writing. We also present a thorough study of the performance characteristics of our optimized implementation and the reference implementations on a large-scale distributed-memory supercomputer with a fat-tree InfiniBand network.
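
As a rough sketch of the frontier-expansion step in a 2D-partitioned, level-synchronous BFS (an illustration of the general pattern, not the authors' implementation), each rank first gathers the current frontier from the other ranks in its process column before scanning its local block of the adjacency matrix; newly discovered vertices are then folded back along the process row.

    // Hypothetical sketch: gather the BFS frontier across one process column so
    // that every rank in the column can scan its local adjacency block.
    #include <mpi.h>
    #include <vector>
    #include <cstdint>

    std::vector<int64_t> gather_column_frontier(const std::vector<int64_t> &local_frontier,
                                                MPI_Comm col_comm) {
        int csize;
        MPI_Comm_size(col_comm, &csize);

        // Share how many frontier vertices each rank in the column contributes.
        int local_n = static_cast<int>(local_frontier.size());
        std::vector<int> counts(csize), displs(csize);
        MPI_Allgather(&local_n, 1, MPI_INT, counts.data(), 1, MPI_INT, col_comm);

        int total = 0;
        for (int i = 0; i < csize; ++i) { displs[i] = total; total += counts[i]; }

        // Every rank in the column receives the concatenated column frontier.
        std::vector<int64_t> col_frontier(total);
        MPI_Allgatherv(local_frontier.data(), local_n, MPI_INT64_T,
                       col_frontier.data(), counts.data(), displs.data(), MPI_INT64_T,
                       col_comm);
        return col_frontier;  // next: scan local edges, then exchange discoveries along the row
    }

In an optimized implementation such as the one described above, this traffic could additionally be compressed (for example as bitmaps), which is presumably part of the communication compression the abstract refers to.
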
Speaker Toyotaro Suzumura
Affiliation Tokyo Institute of Technology / IBM Research - Tokyo
Biography Toyotaro Suzumura received his Ph.D. from Tokyo Institute of Technology in 2004. He has been working at IBM Research - Tokyo as a research staff member. He also became a visiting associate professor in the Department of Computer Science at Tokyo Institute of Technology in April 2009. His research interests are mostly in distributed high-performance processing, such as stream computing, cloud computing, GPGPU, and large-scale graph processing.

 


Title Large-scale CFD applications on GPU-rich supercomputer TSUBAME2.0
Abstract Most stencil applications in Computational Fluid Dynamics (CFD) are memory-bound problems. GPUs provide high performance in both computation and memory bandwidth, which makes them well suited to such applications. I will show simulation results for gas-liquid two-phase flows, turbulent flows computed with the lattice Boltzmann method, and compressible flows using a high-order numerical scheme, all carried out on the TSUBAME 2.0 supercomputer, which is equipped with 4,224 NVIDIA Tesla M2050 GPUs.
Speaker Takayuki Aoki
Affiliation GSIC, Tokyo Institute of Technology
Biography Takayuki Aoki received a B.Sc. in Applied Physics (1983), an M.Sc. in Energy Science, and a Dr.Sci. (1989) from Tokyo Institute of Technology. He was a Visiting Fellow at Cornell University and at the Max Planck Institute in Germany for one year, has been a professor at Tokyo Institute of Technology since 2001, and has been deputy director of the Global Scientific Information and Computing Center since 2009. He has received the Computational Mechanics Achievement Award from the Japan Society of Mechanical Engineers as well as many awards and honors in visualization and other fields. He authored the first Japanese-language book on CUDA programming and its applications.

 


Title Petaflop Biofluidics on the Tsubame 2.0 Supercomputer
Abstract We present a computational framework for multi-scale simulations of real-life biofluidic problems, applied to the simulation of blood flow through the human coronary arteries at a spatial resolution comparable to the size of red blood cells and at physiological levels of hematocrit. The simulation on Tsubame 2.0 exhibits excellent scalability up to 4,000 GPUs and achieves close to 1 Petaflop of aggregate performance, demonstrating the capability to predict the evolution of biofluidic phenomena of clinical significance. The combination of novel mathematical models, computational algorithms, hardware technology, and optimization will be discussed, together with an application employed to assess the vulnerability of the coronary network to atherosclerotic plaque build-up in support of clinical decision-making.
Speaker Simone Melchionna
Affiliation National Research Council Italy
Biography Simone Melchionna is a researcher at the National Research Council's Institute for Physico-Chemical Processes. His research interests cover complex and biological systems, confined fluids, and proteins and DNA,
investigated via computational methods such as Molecular Dynamics, Monte Carlo, Lattice Boltzmann, and Density Functional Theory. He is involved in developing multiscale methods inspired by kinetic and microscopic theories of liquids. He received a PhD in Chemistry from the University of Rome La Sapienza.

 


Title Petascale Data-Intensive Supercomputing on TSUBAME2.0
Abstract -
Speaker Hitoshi Sato
Affiliation GSIC, Tokyo Institute of Technology
Biography -
