User Research

The Computational Research and CyberInfrastructure Support Initiative (CCR-CIS), under the Office of Research, supports the work of many researchers, including faculty, postdocs, and students. This support includes joint scientific and engineering research collaborations, HPC computational resources at USC, middleware support for cluster and large-scale computing, instructional tutorials, access to TeraGrid/XSEDE national supercomputer sites through the NSF Campus Champions program, software and systems support, and Desktop-to-TeraGrid HPC tools. CCR-CIS participants include researchers within the University of South Carolina system and at neighboring institutions. Here we briefly describe several research projects conducted by USC researchers.

Dynamic 3D modeling of biofilms using GPU computing

Prof. Qi Wang (Mathematics, University of South Carolina)

We study complex phenomena in nanoscience research through an integrated program of theory, modeling, and computational simulation. Current research activities in the group include quantum chemistry, algorithm development and analysis, pattern recognition and visualization, Monte Carlo and molecular dynamics methods, ab initio simulation of complex material systems, multiscale modeling and simulation of molecular structures, self-assembly phenomena, mesoscale structure in complex fluids/soft matter, high-performance computing, and GPU computing. We seek interdisciplinary interaction with other researchers within USC and with outside groups. [read more]

Nanomaterials and Catalysis, Multi-Scale Modeling

Prof. Andreas Heyden (Chemical Engineering, University of South Carolina)

To enable simulations of complex systems that accurately reflect experimental observations, continued advances in modeling potential energy surfaces and statistical mechanical sampling are necessary. While studying systems relevant for catalysis, we develop new theoretical and computational tools for the investigation of these complex chemical systems. Our tool development efforts are at the interface between engineering, chemistry, and physics, and are rooted in classical, statistical, and quantum mechanics with a special focus on novel multiscale methods. [read more]

Quantum chemistry, electron correlation effects in molecules

Vitaly Rassolov (Chemistry and Biochemistry, University of South Carolina)

It is well known to theorists that most chemical reactions require a quantum description of the nuclei due to the effects of tunneling, zero-point energy, and non-adiabatic phenomena. It is appealing to describe quantum effects using semiclassical methods. The problem is that modern semiclassical methods are often more expensive than a full quantum treatment, and their error is difficult to assess. My group develops a method based on Bohmian trajectories that is very inexpensive in the semiclassical limit, compares favorably to other semiclassical methods (such as Herman-Kluk), and is systematically improvable. [read more]

Development of approximate quantum potential method applicable to large molecular systems and studies of reactivity of hyperthermal oxygen

Sophya Garashchuk (Chemistry and Biochemistry, University of South Carolina)

For general problems, the exact determination of the quantum potential is at least as difficult as the solution of the standard Schrödinger equation, but the quantum trajectory formulation provides a convenient starting point for approximating the "quantum" quantities, which are small in the semiclassical limit of heavy particles such as nuclei. We develop global approximations to the quantum potential that capture dominant quantum effects, such as zero-point energy, tunneling, and wavepacket bifurcation, in a computationally efficient manner (currently tested for up to 40 degrees of freedom). [read more]
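
For reference, the quantum potential of the Bohmian (quantum trajectory) formulation follows from writing the wavefunction in polar form; the textbook expression is

$$\psi(x,t) = A(x,t)\, e^{\,i S(x,t)/\hbar}, \qquad Q(x,t) = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 A(x,t)}{A(x,t)},$$

which scales as $\hbar^2/m$ and therefore becomes small for heavy particles such as nuclei, consistent with the semiclassical limit discussed above. (This is the standard definition, not the group's particular approximation scheme.)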

Spin photovoltaic effects in quantum wires

Yuriy Pershin (Physics and Astronomy, University of South Carolina)

We consider the current induced in a quantum wire by external electromagnetic radiation. The photocurrent is caused by the interplay of spin-orbit interaction and an external in-plane magnetic field. We calculate this current using a Wigner-function approach, taking into account radiation-induced transitions between transverse subbands. [read more]

Electron Microscopy and Nano Imaging Reconstruction

Douglas Blom (Electron Microscopy Center, University of South Carolina)

A systematic approach is being developed to extract high-resolution information from HAADF-STEM images, which will be beneficial for the characterization of beam-sensitive materials. The idea is to treat several, possibly many, low-electron-dose images with specially adapted digital image processing concepts applied to the raw STEM data at the minimum allowable spatial resolution. We are refining the main imaging concepts and restoration methods suitable for carrying out such a program that, in particular, allow one to correct special acquisition artifacts. This project is a collaboration with members of the Interdisciplinary Mathematics Institute here at the University of South Carolina. [read more]

2012 Pilot Training Program: Middleware for Research

EPSCoR RII Track I & II assistantship support was provided for 13 graduate and 5 undergraduate students in the Middleware Training Program for Spring/Summer 2012. Student participants in the program were nominated by selected faculty mentors drawn from a wide variety of disciplines, including tissue fabrication, genomics, fuel cells, computational biology, electrical engineering, sensor data analysis, and the digital humanities. The faculty researchers were identified based both on the quality of their research and on their being strong candidates for HPC enhancement through middleware support.

Thermo-mechanical Modeling of SiC/SiC Nuclear Fuel Cladding Test Assembly

Luis Alva-Solari (Mechanical Engineering, University of South Carolina) mentored by Prof. Xinyu Huang

The objective of this modeling study is to simulate the temperature distribution and the mechanical stress/strain state in a SiC/SiC composite cladding under simulated thermo-mechanical loading. A zirconia tube will be inserted into the SiC/SiC tube with a predetermined radial interference between them, and an intense heat source placed inside the zirconia tube (the inner tube) will heat the inner tube to a target temperature. The hot, expanding zirconia tube will subsequently heat and press against the SiC/SiC composite tube. The set of tubes will be isolated from the surrounding air by thermal insulation.

As the heat source is not in contact with the inner tube, heat is expected to be transmitted mainly by radiation. As the inner tube is in contact with the SiC/SiC tube, heat will be transferred to the SiC/SiC tube by conduction. Because of the difference in physical properties between the zirconia and the SiC, in particular the different thermal expansion coefficients, and because of the temperature gradient (hotter inside), mechanical interference stresses will be generated at the interface between the SiC/SiC and the zirconia tube. Radial and hoop stresses will be induced as well. The induced stresses will eventually cause the SiC/SiC to crack and fracture. Due to the coupled nature of the thermal and mechanical response and the complicated geometry involved, it is challenging to obtain an analytical solution to the problem. Hence, COMSOL Multiphysics software will be used to simulate the thermo-mechanical response of this test assembly. The model will help us optimize the experimental design and conduct failure analysis. To facilitate rapid execution of large 2D or 3D problems, we will attempt to use CCR-CIS resources such as parallel computing to speed up the COMSOL runs. See Mentor's website for more details.
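
As a rough illustration of the radiation-dominated gap described above, the following is a minimal sketch of the standard gray-body exchange relation for long concentric cylinders; the temperatures, radii, and emissivities are placeholder values, not parameters of the actual test assembly, and the real analysis is done in COMSOL.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_flux_concentric(T_in, T_out, r_in, r_out, eps_in, eps_out):
    """Net radiative heat flux (W per m^2 of inner surface) between two long,
    concentric, diffuse-gray cylinders."""
    denom = 1.0 / eps_in + (r_in / r_out) * (1.0 / eps_out - 1.0)
    return SIGMA * (T_in ** 4 - T_out ** 4) / denom

# Placeholder numbers only: a 1500 K heater radiating to a 600 K zirconia bore.
q = radiative_flux_concentric(1500.0, 600.0, r_in=0.004, r_out=0.005,
                              eps_in=0.8, eps_out=0.4)
print(f"net radiative flux ~ {q / 1000:.1f} kW/m^2")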

Electron Microscopy Image De-Noising using Fourier Analysis (Undergraduate Project)

Charles Arnold (Mathematics, University of South Carolina) mentored by Prof. Benjamin Berkels

Electron microscopy requires a high density of electrons per square nanometer in order to produce images with high resolution and an acceptable signal-to-noise ratio. The environment in which an electron microscope is used must be very stable: ambient vibrations, temperature fluctuations, acoustic waves, and electromagnetic waves can all add unwanted information, or noise, to the electron microscope readings. When analyzing soft materials one must use a smaller dose of electrons per square nanometer, because the standard dose can damage or destroy the material. However, this can cause the readings to have low resolution and an unacceptable signal-to-noise ratio.

This project involved developing a MATLAB program to analyze a reading from an electron microscope with low resolution and an unacceptable signal-to-noise ratio. The objective was to remove unwanted information from the reading, yielding an acceptable signal-to-noise ratio and improved resolution. The MATLAB program first read a low-resolution, noisy image from an electron microscope. Using the fast Fourier transform, the program decomposed the image into its frequency components. My goal was then to remove noise from the image; this was accomplished by setting each Fourier coefficient that was very small in magnitude to zero. The program then used the inverse fast Fourier transform to return a new image with an acceptable signal-to-noise ratio and improved resolution. See Mentor's website for more details.
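
The project code was written in MATLAB; the snippet below is only a minimal NumPy sketch of the same idea (hard-thresholding small-magnitude Fourier coefficients), with an arbitrary keep fraction rather than the project's actual threshold.

import numpy as np

def fft_hard_threshold(image, keep_fraction=0.1):
    """Zero out the smallest-magnitude Fourier coefficients of a 2-D image.
    keep_fraction (a placeholder) is the fraction of coefficients, ranked by
    magnitude, that are kept; all others are set to zero."""
    F = np.fft.fft2(image)                       # forward 2-D FFT
    cutoff = np.quantile(np.abs(F), 1.0 - keep_fraction)
    F[np.abs(F) < cutoff] = 0.0                  # hard threshold
    return np.real(np.fft.ifft2(F))              # back to image space

# Usage on a synthetic noisy image:
rng = np.random.default_rng(0)
noisy = np.ones((64, 64)) + 0.3 * rng.standard_normal((64, 64))
denoised = fft_hard_threshold(noisy, keep_fraction=0.05)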

Enhancing Computational Science Curriculum to Include Key Concepts of Parallelism  (Undergraduate Project)

Trent Edwards and James Sweeney (Mathematics and Computer Science, Coker College) mentored by Prof. Paul Dostert

The Coker College research team focused their efforts on bringing parallel structure and high-performance computing into the computer science and mathematics curriculum, not only at their own school but at any school looking to introduce a more up-to-date computer science and mathematics program. These courses give all beginning computer science majors the basic essentials needed to understand and use parallel computing in their programs throughout their careers. Parallel computing is essential in this age of large data sets and a heavy emphasis on speed.

This new and improved set of courses will prepare all graduating computer science majors and minors with the background necessary to function in a job market that is expanding to incorporate these high-performance ideals. Along with the reassessed courses, the program will incorporate a parallel-programming knowledge test. This test will be administered at the beginning of a student's education, at the midpoint, and prior to graduation, ensuring that the program is living up to the demands of the market and is properly preparing students for the future. With this program the Coker College research team is ensuring that graduates are both knowledgeable and competent in parallel computing and high-performance computing in general. See Mentor's website for more details.

Effects of polymer structure and composition on fully resorbable endovascular scaffold performance

Jahid Ferdous (Biomedical Engineering, University of South Carolina) mentored by Prof. Tarek Shazly

Fully erodible endovascular scaffolds are increasingly considered for the treatment of ischemic artery disease owing to their potential to mitigate the long-term risks associated with permanent devices. While complete scaffold erosion facilitates vessel healing, the generation and release of material degradation by-products may elicit a local inflammatory response that limits implant efficacy. Poly-L-lactide (PLLA) is a candidate material for a variety of erodible implants due to its generally acceptable biocompatibility and its tunable physical, chemical, and mechanical properties.

In this study, we will develop a two-dimensional computational model using the commercially available finite-element-based software COMSOL Multiphysics™ to quantify how the compositional and structural parameters of PLLA-based fully erodible endovascular scaffolds affect degradation kinetics, erosion kinetics, and the transient accumulation of material by-products within the arterial wall. Computational modeling provides an efficient means to evaluate design strategies to mitigate the potential for by-product accumulation within local tissue throughout the lifetime of fully erodible endovascular scaffolds. See Mentor's website for more details.

GPU acceleration of pyrosequencing noise removal

Yang Gao (Computer Science and Engineering, University of South Carolina) mentored by Prof. Jason Bakos

AmpliconNoise, an updated version of Pyronoise, is a tool for removing noise from metagenomic data recorded by a 454 pyrosequencer. AmpliconNoise has been shown to be effective in reducing overestimation of operational taxonomic units (OTUs) and in chimera detection. AmpliconNoise's noise removal method relies on clustering a large set of short sequences read by the sequencer. The DNA sequencing algorithm requires the computation of O(n²) pairwise distances using a global sequence alignment method. Each sequence consists of a few hundred base pairs and a typical dataset contains on the order of 10⁴ sequences, making the clustering computation extremely expensive.

In this paper we describe a GPU kernel implementation of the most computationally expensive module in the AmpliconNoise software package, SeqDist. With our GPU workstation (Intel Core i7 980 @ 3.33 GHz + 3 x NVIDIA Tesla C2070) and a typical 454 dataset, our implementation (CUDA-SeqDist) achieves an 8.6X speedup with a single GPU compared with 12 MPI ranks of the original tool running on the CPU alone. With three GPUs, we achieve a further 2.1X speedup over the single-GPU version, yielding a total speedup of 18.3X. We measure the throughput of our kernel to be 1.4 giga floating-point cell updates per second (GFCUPS) with a single GPU and 2.9 GFCUPS with 3 GPUs, where GFCUPS refers to the rate at which the score matrix is updated in the specialized alignment algorithm used in AmpliconNoise. The most interesting part of our research is the comparison between the MPI-equipped cluster and the GPUs. We found that one T10 GPU is comparable to 64 ranks of Xeon Extreme Edition processors, while four T10 GPU cards outperform the full capacity of the cluster with 128 ranks. A single C2070 GPU achieves nearly 1.5 GFCUPS, which is equivalent to about a 90-node MPI cluster. With three C2070 GPUs we achieved a further speedup of 1.9. See Mentor's website for more details.
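
For intuition only, here is a minimal Python sketch of why the pairwise stage is so expensive: every pair of reads is globally aligned, so the work grows quadratically with the number of reads, and each alignment itself costs one "cell update" per score-matrix entry. The plain Needleman-Wunsch scorer below is a generic stand-in, not the specialized flowgram-aware distance that SeqDist actually computes.

import numpy as np

def global_alignment_cost(a, b, mismatch=1.0, gap=1.0):
    """Plain Needleman-Wunsch edit cost between two sequences (a generic
    stand-in for SeqDist's specialized pairwise distance)."""
    m, n = len(a), len(b)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = np.arange(m + 1) * gap
    D[0, :] = np.arange(n + 1) * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):            # each (i, j) is one cell update
            sub = 0.0 if a[i - 1] == b[j - 1] else mismatch
            D[i, j] = min(D[i - 1, j - 1] + sub,
                          D[i - 1, j] + gap,
                          D[i, j - 1] + gap)
    return D[m, n]

def pairwise_distances(reads):
    """O(n^2) pairs, each costing O(len(a) * len(b)) cell updates."""
    n = len(reads)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = global_alignment_cost(reads[i], reads[j])
    return dist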

Agent-based Simulations of Casualties Incurred by Invasion of Japan

Michael Helms (Center for Digital Humanities, University of South Carolina) mentored by Profs. John Bonnet (Brock Univ.) and David Miller (USC)

The atomic bombings of Hiroshima and Nagasaki remain one of the most contested topics in the literature devoted to the conclusion of World War II and the beginning of the Cold War. One important point of contention is the cost of the bombings. We know that more than 200,000 people died as a result of the bombings. How many would have died if the bombs had not been used, or if they had not worked and the U.S. and its allies had been forced to invade Japan proper in late 1945 and early 1946? In their post-war defenses of the bombings, members of the Truman administration argued that they had expected anywhere between half a million and two million casualties on the Allied side alone.

In the mid-1980s, however, historians began to question that defense, pointing to contemporary documents issued by military planners suggesting that Allied casualties in dead and wounded would have numbered in the tens of thousands, well below the actual casualties of the bombings and the numbers cited by the Truman administration. Put simply, some historians argued that the Truman administration expected fewer casualties than it claimed, and by extension that fewer people would have died if the bombs had not been used. We now have much better, more rigorous methods to determine the potential casualties that Allied and Japanese forces, and Japanese civilians, would have incurred if the U.S. had been forced to invade Japan proper, methods supported by high-performance computing and agent-based simulation. See Mentor's website for more details.

Using the Lefkovitch Matrix to examine long-term population projections of island populations of the Eastern Diamondback Rattlesnake, Crotalus adamanteus

Mike Martin (Biology, University of South Carolina) mentored by Prof. Tim Mousseau

Demographic information for wildlife populations is important in helping wildlife managers make appropriate management decisions; however, detailed information for populations is often lacking. The Eastern Diamondback Rattlesnake (Crotalus adamanteus; EDB) is currently under review for listing as a federally threatened species, yet studies have not focused on survival with respect to life history for this long-lived species. Pressure from habitat loss and persecution is thought to be the primary driver of the decline of populations of this species.

For this project, we aim to use survival data from a 3-year (2009-2012) telemetry and capture-mark-recapture study of an island population in South Carolina in a Lefkovitch matrix to determine long-term population trends. Furthermore, we construct a dynamic transformation matrix with parameter estimates that change over time to represent the decrease in resources available to the population as a result of the rapid development of areas within the species' geographic range. We use MATLAB to run eigenvalue/eigenvector analysis to determine the current population trajectory, and sensitivity/elasticity analysis of the survival and growth parameters to determine priority size classes for conservation efforts. Finally, we will incorporate perturbations to elements within the transformation matrix to explore the outlook for this and other EDB populations. See Mentor's website for more details.
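
The eigen-analysis itself is done in MATLAB; the snippet below is only a generic NumPy sketch of the standard matrix-population calculations (dominant eigenvalue as asymptotic growth rate, stable stage distribution, and sensitivity/elasticity matrices), using a made-up 3-stage matrix rather than the EDB parameter estimates.

import numpy as np

# Placeholder 3-stage matrix (top-row fecundities, sub-diagonal growth,
# diagonal stasis); NOT the EDB estimates.
A = np.array([[0.0, 0.5, 2.0],
              [0.3, 0.4, 0.0],
              [0.0, 0.5, 0.8]])

vals, W = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals.real[k]                        # asymptotic growth rate
w = np.abs(W[:, k].real)                  # stable stage distribution (right)

vals_t, V = np.linalg.eig(A.T)
v = np.abs(V[:, np.argmax(vals_t.real)].real)   # reproductive values (left)

sens = np.outer(v, w) / (v @ w)           # d(lambda)/d(a_ij) sensitivities
elas = (A / lam) * sens                   # proportional (elasticity) form

print(f"lambda = {lam:.3f}")
print("stable stage distribution:", np.round(w / w.sum(), 3))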

Multislice Frozen Phonon HAADF-STEM Image Simulation of Important Inorganic Materials

Sonali Mitra (Chemistry, University of South Carolina) mentored by Dr. Doug Blom

Multislice frozen-phonon high-angle annular dark field (HAADF)-STEM (scanning transmission electron microscopy) image simulations of various intergrowths and grain boundaries of the orthorhombic M1-type phase and the trigonal Mo3VOx phase of industrially important molybdenum-vanadium bronze-based selective oxidation catalysts will be carried out using high-performance cluster computing. The "autostem" code, written in C, developed by E.J. Kirkland, and freely available for download, is used for the multislice frozen-phonon simulations.

These simulated images and the experimental HAADF-STEM images are studied at the atomic scale for quantitative comparisons of atomic coordinates and metal site occupancies, along with the results derived from X-ray and neutron powder diffraction Rietveld refinements. Aberration-corrected HAADF-STEM imaging has allowed researchers to study images at atomic resolution. The contrast of the different atomic columns in these images is roughly proportional to Z², where Z is the atomic number of the atoms along the column. Image simulations lead to a better understanding of the specimen-electron interactions and help to improve the images as well as calibrate and assess the electron optics. Multislice frozen-phonon HAADF-STEM image simulation studies are also conducted on the family of oxyfluoride compounds Sr3AlO4F and NaCa2GeO4F, which can be used as novel phosphor materials. This study will enable us to understand the relationship between the structure and the photoluminescent properties of these compounds. See Mentor's website for more details.

Building a middleware cybersecurity interface for cloud supercomputing  (Undergraduate Project)

Loodwing Murillo (Mathematics and Computer Science, Benedict College) mentored by Prof. Shahadat Kowuser

Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. In contrast to traditional solutions, where IT services are under proper physical, logical, and personnel controls, cloud computing moves application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges that have not been well understood. In our research, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors.

By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete, and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failures, malicious data modification attacks, and even server colluding attacks. See Mentor's website for more details.

Utilizing bioinformatics and geographical tools to correlate spatial and genetic distance in the malaria parasite, Plasmodium falciparum

Chase Nelson (Biology, University of South Carolina) mentored by Prof. Austin L. Hughes

This project studied genetic variation in the C-terminal non-repetitive (CTNR) region of the circumsporozoite protein (CSP) of Plasmodium falciparum, the most virulent human malaria parasite (Hughes 1991). The CTNR region contains T-cell epitopes that are enriched in radical nonsynonymous (amino acid-altering) changes, indicative of positive (Darwinian) diversifying selection for immune evasion (Hughes & Hughes 1995). Previous attempts at a geographical analysis have shown that levels of polymorphism differ between regions of Thailand under differing levels of pest control (Jongwutiwes et al. 2010). Here, a more thorough geographical analysis was undertaken. First, the entire CTNR region was obtained for the 7G8 sequence (Dame et al. 1984).

Homologous sequences were obtained using BLAST. After filtering a chromosome and sequences lacking full coverage of this region, 447 sequences remained for analysis. Each sequence was traced to its geographic origin through a literature search and communication with authors, revealing that most sequences originate from Thailand (335), Gambia (44), Kenya (18), India (11), and Venezuela (10). After this information was obtained, all sequences were aligned using ClustalW, and phylogenetic trees were produced using the Neighbor-Joining and Maximum Likelihood methods. These trees revealed that sequences originating from the same region often do not cluster together, suggesting that genetic distance is in some ways independent of geographic distance, perhaps as a result of natural selection. A z-test was performed to compare genetic diversity (measured as the Jukes-Cantor-corrected number of nonsynonymous substitutions per nonsynonymous site; Nei & Gojobori 1986) within Africa (77 sequences) to diversity in the rest of the world (370 sequences). Diversity was greater in Africa, but the result was not statistically significant (P = 0.0961). Finally, a preliminary analysis was undertaken using the ArcGIS geographic information systems software to show how it might be used to perform geographic analyses of genetic data. Proportional symbol maps were produced showing genetic distance from selected sequences, and Moran's I statistic of spatial autocorrelation was calculated for selected sequences. Future work will extend analyses of polymorphism within and between geographic regions, and the ArcGIS analysis will be extended to the full dataset. Ultimately, it is hoped that this research will augment our understanding of the evolution of malaria and help guide control measures for this parasite. See Mentor's website for more details.
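
As a pointer to the distance measure mentioned above, the Jukes-Cantor correction converts an observed proportion p of differing sites into an estimated number of substitutions per site. The sketch below applies the generic formula to a proportion of nonsynonymous differences; the counts are made up, not the study's data.

import math

def jukes_cantor(p):
    """Jukes-Cantor corrected distance from an observed proportion p of
    differing sites (here, nonsynonymous differences per nonsynonymous site)."""
    if p >= 0.75:
        raise ValueError("Jukes-Cantor correction is undefined for p >= 0.75")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Made-up example: 12 nonsynonymous differences over 150 nonsynonymous sites.
p_obs = 12 / 150
print(f"observed p = {p_obs:.3f}, JC-corrected d = {jukes_cantor(p_obs):.3f}")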

Comparative metagenomic and transcriptomic analysis of the microbial community living in hypersaline mats in San Salvador

Eva Preisner (Environmental Health Sciences, University of South Carolina) mentored by Prof. Sean Norman

For this project we conduct comparative metagenomic and transcriptomic analyses to investigate the variety of bacteria (using 16S rRNA gene sequences) and of active bacteria (16S rRNA transcript sequences) in microbial mats. The objective is to develop computational procedures that help us perform the corresponding bioinformatic analysis. That analysis must first trim the sequences to remove primers. The next step is to process the transcript data. The pipeline procedure (i.e., percent recovery of internal control transcripts) aligns all sequences to the internal control sequence to see how many contigs are formed, and then removes the internal control sequence from the sequence data.

Finally, the read annotation procedure is invoked. Ribosomal RNAs need to be removed using a BLASTn search against, for instance, the small- and large-subunit SILVA databases with a bitscore cutoff greater than 50, and small non-coding RNAs need to be removed using, for instance, the Rfam database with BLASTn and a bitscore greater than 40. All remaining reads need to be annotated using BLASTx searches against the NCBI RefSeq and Clusters of Orthologous Genes (COG) databases. Our aim is to develop a workflow combining all steps using code components from QIIME, MOTHUR, or MEGAN. See Mentor's website for more details.
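
As a small illustration of the bitscore-based filtering steps described above, the sketch below collects read IDs whose best hit in a BLAST tabular (-outfmt 6) report exceeds a cutoff; the file names are placeholders, the cutoffs follow the text (rRNA > 50, small non-coding RNA > 40), and the real workflow would chain this with QIIME, MOTHUR, or MEGAN components.

import csv

def reads_to_drop(blast_tab_path, bitscore_cutoff):
    """Return query IDs whose BLAST hit exceeds the bitscore cutoff.
    Expects NCBI BLAST tabular output (-outfmt 6); bitscore is column 12."""
    drop = set()
    with open(blast_tab_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query_id, bitscore = row[0], float(row[11])
            if bitscore > bitscore_cutoff:
                drop.add(query_id)
    return drop

# Placeholder file names for the SILVA and Rfam searches.
rrna_reads = reads_to_drop("reads_vs_silva.tab", 50.0)
ncrna_reads = reads_to_drop("reads_vs_rfam.tab", 40.0)
reads_to_remove = rrna_reads | ncrna_reads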

Multiple object tracking with occlusion handling using Average Longest Path framework

Dhaval Salvi (Computer Science and Engineering, University of South Carolina) mentored by Prof. Song Wang

We formulate tracking as a graph problem. Starting from a set of candidate detection boxes on each video frame, we construct initial "tracklets." Tracklets are coherent sub-portions of an entire track, computed using simple heuristics such as the distance between detection boxes in consecutive frames. We then combine these tracklets in a hierarchical manner to obtain the complete tracks. At each level of the hierarchy we construct a directed graph in which each tracklet is represented by a node, together with two additional nodes: an entry node S and an exit node T.

We then construct directed edges in this graph, with inter-tracklet similarity defining the weights on these edges. The goal of tracking is then to find a directed path P from S to T with minimum average cost. We formulate the cost function as the ratio of the inter-tracklet edge weight along the path to the number of detection boxes in its tracklets. We convert this optimal path-finding problem into one of finding an optimal cycle C in the graph. By binary search we can reduce this to a problem of negative-weight cycle detection, which in a directed graph can be done easily and efficiently in polynomial time. See Mentor's website for more details.
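
For reference, a minimal Bellman-Ford-style negative-cycle check is sketched below. It is a generic implementation of the subroutine that the binary search relies on, not the authors' tracking code; in the ratio search, each edge weight would be the original cost adjusted by the current guess of the optimal ratio.

def has_negative_cycle(num_nodes, edges):
    """Bellman-Ford check for a negative-weight cycle in a directed graph.
    edges is a list of (u, v, w) triples with 0 <= u, v < num_nodes."""
    dist = [0.0] * num_nodes      # all zeros: detects a cycle reachable from anywhere
    for _ in range(num_nodes - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False
    # If any edge can still be relaxed, a negative-weight cycle exists.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Example: a 3-node cycle with total weight -0.5.
print(has_negative_cycle(3, [(0, 1, 1.0), (1, 2, 1.0), (2, 0, -2.5)]))  # True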

Applications of Compressed Sensing to Electron Tomography

Toby Sanders (Mathematics, University of South Carolina) mentored by Prof. Peter Binev

The analysis of biological and soft materials relies on images formed by electron microscopes. Unfortunately, obtaining a quality image of these materials can require high-density sampling, which can damage the material under investigation. We therefore need special reconstruction techniques, such as compressed sensing (CS), for the under-sampled data that come from electron tomography (ET). The process of ET involves scanning the material with an electron beam along a grid at several different angles. For each scan we obtain a value which corresponds to the integral over the intersection of the material and the electron beam. This can be represented mathematically as a matrix multiplication with our material x, i.e., Ax = b, where b is a vector containing the values of the integrals.

The scanning matrix A is M by N, where typically M is much smaller than N; hence the material is highly under-sampled. Once we have the values in A and b, we apply CS techniques in order to reconstruct our image x. For our research, we wanted to simulate this process in order to find the best sampling and reconstruction techniques. So, given a beam width, a grid, and a set of angles, we created a program to generate a scanning matrix A. Once this is done, we multiply our image by A to obtain the values in b. We then try to recover the image using the ideas of CS through the MATLAB package NESTA. Several different scanning matrices A were used on two different test images. Several parameters needed to be adjusted for each simulation, and the results were positive. The image shown is the recovery of a 128 x 128 image using 5 scanning angles (0°, ±30°, ±60°), with a beam width of 1.3 pixels and a step size of 0.8 pixels between the parallel beams. We hope to further this research in collaboration with the Pacific Northwest Laboratory. See Mentor's website for more details.
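
The reconstructions themselves used the MATLAB package NESTA; the toy sketch below is meant only to illustrate the Ax = b formulation and an l1-regularized recovery, using a random under-sampled matrix and a plain iterative soft-thresholding loop rather than NESTA or an actual tomographic scanning geometry.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a sparse signal x_true and under-sampled measurements b = A @ x_true.
N, M = 200, 60                                   # N unknowns, M << N measurements
x_true = np.zeros(N)
x_true[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)
A = rng.standard_normal((M, N)) / np.sqrt(M)
b = A @ x_true

# Iterative soft-thresholding (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - b))           # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))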

An Algorithm to quickly compute the approximate distance field in proximity to a point cloud

Jenny Tabat (Mathematics, University of South Carolina) mentored by Prof. Peter Binev

My research project is to create an algorithm to quickly compute the approximate distance field in proximity to a point cloud. The goal is to implement this algorithm in parallel on GPUs so that the distances can be generated quickly for a large point cloud. An octree is used to sort the points into cubes through several dyadic splits until the cubes reach the required size. For each vertex in the domain, principal component analysis will be used on the local point cloud to find the rectangle that approximates it, and the distance from the vertex to that rectangle will then be computed. This algorithm can then be used to initialize an Eikonal equation solver to generate a distance field on the entire domain. See Mentor's website for more details.
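
A minimal NumPy sketch of the local PCA step is given below: it fits an oriented rectangle (spanned by the two dominant principal directions and bounded by the local points' extent) to a neighborhood and returns the distance from a query vertex to that rectangle. It is a generic serial illustration, not the project's GPU implementation.

import numpy as np

def distance_to_pca_rectangle(local_points, query):
    """Fit an oriented rectangle to local_points (k x 3) via PCA and return
    the distance from query (a length-3 vector) to that rectangle."""
    center = local_points.mean(axis=0)
    centered = local_points - center
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    u, v = Vt[0], Vt[1]                        # dominant in-plane axes
    coords = centered @ np.stack([u, v]).T     # local (s, t) coordinates
    smin, tmin = coords.min(axis=0)
    smax, tmax = coords.max(axis=0)

    d = query - center
    s = np.clip(d @ u, smin, smax)             # clamp projection to the rectangle
    t = np.clip(d @ v, tmin, tmax)
    closest = center + s * u + t * v
    return np.linalg.norm(query - closest)

# Example: a noisy planar patch of points and a query vertex above it.
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(-1, 1, (50, 2)), 0.01 * rng.standard_normal(50)]
print(distance_to_pca_rectangle(pts, np.array([0.2, -0.3, 0.5])))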

Mechanical Behavior of the Heart Post-Myocardial Infarction  (Undergraduate Project)

Lauren Wolf (Biomedical Engineering, University of South Carolina) mentored by Prof. Tarek Shazly

A myocardial infarction is caused by inadequate perfusion of an area of the myocardium and produces lasting changes in the material behavior of the heart. At the epicenter of the damage lies the zone of infarct/necrosis, in which tissue function cannot be recovered. A zone of ischemia surrounds the zone of infarct and consists of tissue that is damaged and may not function optimally; the necrotic zone may expand into the ischemic zone if the mechanism of myocardial injury is not corrected.

Post-infarct, contraction of the heart results in a bulging deformation of the damaged zone of tissue, potentially creating a zone of stress that radiates from the damaged tissue. The objective of this project is to model the mechanical behavior of this tissue in three dimensions as an anisotropic, hyperelastic material in COMSOL using the structural interaction module. The model will compare the average stress in healthy heart tissue with the stress in the infarct region as the size and shape of the infarct change, and examine how these changes manifest in the mechanical properties of the material. The model will also examine the changes in material properties in an infarct precipitated by occlusion of the left circumflex artery. See Mentor's website for more details.

Accelerating simulation of Virtual Test Bed software by adopting the Latency Insertion Method in a high performance computing environment

Huaxi Zheng (Electrical Engineering, University of South Carolina) mentored by Prof. Roger Dougal

The Latency Insertion Method (LIM) is adopted to accelerate simulation in the Virtual Test Bed (VTB) software in a high-performance computing environment at USC. VTB is a multi-discipline simulation tool currently being developed at USC. VTB uses matrix operations to solve sets of simultaneous equations. Matrix operations do not scale linearly: as the size of the system increases, the time required to solve the system grows cubically, O(N³).

LIM is an explicit time-domain simulation approach used to partition a large system model into multiple smaller models. By breaking a large matrix into several smaller matrices, we can reduce the overall runtime. In most electrical power system models, cables or transmission lines connect the generators to the various loads. The presence of these cables and transmission lines provides an opportunity to use LIM to break a large system into several smaller subsystems. The cable and transmission line components in VTB are currently implemented in such a way as to use LIM internally, which permits the components to become "LIM connectors." Thus, the larger system can be broken at these points into smaller subsystems that can be solved in parallel.
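
As a back-of-the-envelope illustration of why partitioning helps, the sketch below compares the roughly O(N³) cost of one monolithic dense solve with the cost of k independent sub-solves of size N/k, which can additionally be distributed across MPI ranks; the counts are illustrative only and ignore the coupling work carried by the LIM connectors.

def dense_solve_cost(n):
    """Rough operation-count model for one dense n x n solve (~n^3)."""
    return n ** 3

N, k = 3000, 10
monolithic = dense_solve_cost(N)
partitioned_serial = k * dense_solve_cost(N // k)       # all blocks on one rank
partitioned_parallel = dense_solve_cost(N // k)         # one block per MPI rank

print(f"monolithic solve        ~ {monolithic:.2e} ops")
print(f"partitioned (serial)    ~ {partitioned_serial:.2e} ops "
      f"({monolithic / partitioned_serial:.0f}x fewer)")
print(f"partitioned ({k} ranks)  ~ {partitioned_parallel:.2e} ops per rank")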

The proposed work is to modify the ANSI C code that is currently generated by VTB so that it can execute in an MPI environment. We hope this work will reduce the total runtime required to solve very large system models. We will evaluate the performance of the MPI parallel computing environment against a typical Windows environment. Additionally, we would like to develop a service that runs in the MPI environment and can accept VTB simulation jobs. VTB users could submit their schematic to this service, which would then automatically generate ANSI C code for the MPI environment, compile this code, and finally execute the simulation. The service would notify users when their job has completed, and the users would then download the results. If time permits, we will investigate an automated approach to the placement of LIM components within a system so that the user would not need to intercede in order to achieve parallelization. See Mentor's website for more details.