Distributed Computing Approach for High Resolution Medical Images

Fumio Aoki*, Hiroki Nogawa*, Haruyuki Tatsumi*, Hirofumi Akashi*,
Nozomi Nakahashi*, Guo Xin**, Takashi Maeda***
*Information Centre of Computer Communication, Sapporo Medical University
South-1, West-17, Chuo-ku, Sapporo 060-8556, Japan
Fax: +81-11-642-7030, Email: kaku, nogawa, tatsumi, hakashi, nakahasi@sapmed.ac.jp
**Shenyang Institute of Administration, Shenyang 110032, China
*** Hokkaido Information University, Ebetsu 069-8585, Japan


Abstract
To use the high resolution medical image sets provided by the VHP (Visible Human Project, started by the National Library of Medicine, USA, in 1986) for anatomy education, we need powerful computers to handle the large data sets of up to 15-40GB. In this article, we propose a distributed computing approach implemented on low-cost PCs connected by a high-performance network, instead of an expensive supercomputer. The implementation is based on a three-layer architecture consisting of: an OpenGL-based image viewer GUI available on many OS platforms; distributed data processors, each of which holds a subset of the VHP image data in memory and works independently; and a central controller which receives requests from the viewer, controls the distributed processors, collects the results they generate, and forwards the high resolution image to the viewer. Experiments with distributed computing on 35 PCs showed that the proposed solution can respond in about 2 seconds to a request to retrieve a 12MB image from about 15GB of VHP image data, which is over 1,000 times faster than a straightforward method on one PC.


Keywords
Network computing, intranet/internet applications, parallel processing, medical imaging, VHP, distributed systems, image processing, OpenGL


1. Introduction
The VHP (Visible Human Project) is a worldwide project begun in 1986 by the NLM (National Library of Medicine, USA), with the goal of building a digital image library of volumetric data representing a complete, normal adult male and female. Doctors, scientists and engineers from many countries have contributed, and continue to contribute, great effort to this project.

The Department of Anatomy, Sapporo Medical University, School of Medicine has been involved in the VHP for several years. Our main work is the efficient utilization of these large image data sets for educational purposes, especially anatomical morphology visualization.

Table 1 and Table 2 review the specifications of the VHP morphological image data sets.

Table 1: Male Data Set (about 15 GBytes, CryoSect.)

Modality     Interval   bit/pixel   w x h (pixels)   Images
CryoSect.    1 mm       RGB 24      2048 x 1216      1878
MRI          4-5 mm     Grey 12     256 x 256        1029
Frozen CT    1 mm       Grey 12     512 x 512        1877
Norm. CT     1-5 mm     Grey 12     512 x 512         522


Table 2: Female Data Set (about 40 GBytes, CryoSect.)

Modality     Interval   bit/pixel   w x h (pixels)   Images
CryoSect.    0.33 mm    RGB 24      2048 x 1216      5186
MRI          4-5 mm     Grey 12     256 x 256         985
Norm. CT     1 mm       Grey 12     512 x 512        1734


We previously developed a VHP image viewer running on the NeXTSTEP operating system, but it could only dissect the VHP data along the X-Y, Y-Z or Z-X planes by reading data sets previously generated along each plane. Although it had good portability on BSD UNIX platforms, it was not able to generate full-sized arbitrary dissection images or to display 3D views of these images.


Figure 1: Overview of Our Solution

In handling high resolution medical images, the limiting factor used to be the storage space required to keep the data; today it is the real-time processing power needed to visualize the data as desired.


2. The Demand
Anatomy education is based on visual information from dissections of animals or human bodies. Better understanding can be achieved with 3D reconstructions than with 2D photographs. With the image data source from the VHP, we need a high-performance image processing system which can handle both full-size and limited-size images efficiently. The system should:

(1) provide an easy-to-use graphical user interface,

(2) provide easy-to-understand 2D and 3D visualization,

(3) perform real-time or short-time visualization,

(4) perform dissection along arbitrary directions,

(5) handle detailed VHP image data quickly,

(6) reconstruct 3D volume images,

(7) be available over intranet and internet,

(8) be available on different computer platforms and OS (Operating Systems).

On the other hand, the design and implementation had to be done within our limited resources (computers, budget, programming tools, network environment, etc.). In fact, we designed and implemented this system with no additional budget or equipment, apart from the payroll costs of regular staff.


3. Our Solution
For real-time retrieval of dissection images along arbitrary directions from the VHP image data set, we propose a distributed computing architecture with the three layers shown in Figure 1. This new architecture and its implementation have good portability based on the OpenGL rendering libraries, and they generate dissection images not only orthogonal to the coordinate axes but also along arbitrary directions. A parallel processing solution is included for creating full-size dissection images from the original VHP data sets. The implementation provides better visibility and understanding for our anatomy students through its 3D rendering, and better maneuverability using both the keyboard and a pointing device.

The upper layer is the interface layer, called GLview, implemented with the well-known graphics library OpenGL; it handles the interactive operations of the users. The middle layer is the control layer, called Gboss, which communicates with both the interface layer and the data processing layer (the lower layer). According to the request messages from GLview, Gboss sends commands to all the distributed data processors in the lower layer and collects image data pieces from them. The lower layer is the data processing layer, called Gserver, which works according to the messages from Gboss, generating image pieces and writing the data to its shared disk.
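As a rough sketch of this control flow (not the actual Gboss source), the fragment below shows how a control process might accept a TCP connection from the viewer, read a request, and reply with a size parameter followed by the image bytes. The port number, wire format and buffer handling are assumptions made for illustration only; the real dispatch to the Gservers is described in Sections 3.2 and 3.3.

/* gboss_sketch.c - minimal sketch of a Gboss-style accept loop (illustrative).
 * Assumed reply format: a 4-byte big-endian size, then the RGB24 image stream.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define GBOSS_PORT 9200                    /* assumed port, not from the paper */

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(GBOSS_PORT);
    if (bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lsock, 1) < 0) {
        perror("gboss");
        return 1;
    }

    for (;;) {
        int csock = accept(lsock, NULL, NULL);
        char request[256];
        ssize_t n = read(csock, request, sizeof(request) - 1);
        if (n > 0) {
            request[n] = '\0';             /* e.g. "Gslice Long 300 500"       */

            /* The real Gboss would write task files for the Gservers, wait for
             * their finish identifiers and assemble the pieces; the sketch just
             * replies with a dummy 12-byte image.                              */
            unsigned char image[12] = {0};
            uint32_t size = htonl(sizeof(image));
            write(csock, &size, sizeof(size));   /* size parameter packet first */
            write(csock, image, sizeof(image));  /* then the pixel stream       */
        }
        close(csock);
    }
}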


3.1. GLview
GLview, the interface layer, has one main window, three 3D view windows and three full-size 2D view windows. The limited-size views of Trans (transverse), Long (longitude) and Sagit (sagittal) in the main window are used to select dissecting positions. The cross lines over the Long view indicate the orthogonal dissections of the transverse and sagittal views, and the cross lines over the Sagit view indicate those of the transverse and longitude views. These images change in real time when the center positions of the cross lines are moved with the pointing device. Figure 2 illustrates these 2D and 3D dissection views.



Figure 2: 2D and 3D Dissection View of GLview

A right click over the main window area brings up a pop-up menu with eight items: three 3D view window items for dissection image display in 3D space, three 2D view window items for full-size 2D VHP image display, and a dissection mode selection item for the orthogonal, arbitrary and volumetric dissection modes. The last item, Quit, terminates the viewer application.

In the orthogonal dissection mode, dissection images along the X-Y, Y-Z or Z-X planes are generated and displayed in real time by clicking over the images of the Long view and the Sagit view. In the arbitrary dissection mode, a horizontal line and a vertical line are displayed over the two Trans view images of the main window; Long or Sagit dissection images along an arbitrary direction are obtained immediately by dragging the ends of these lines up / down or left / right. In the volumetric dissection mode, a rectangle is drawn over the Long image by dragging with the left button pressed, which defines the edges of the 3D image volume along the X and Y axes. Another rectangle over the Sagit image defines the edges along the Z and Y axes in the same way.

The images in the 3D view windows can be scaled up and down in 3D space by pressing the PageUp and PageDown keys. They rotate around the X axis when the Up and Down arrow keys are pressed or when the pointing device is dragged up and down with the left button held, and around the Y axis with the Left and Right arrow keys or left / right drags.
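A minimal sketch of how such keyboard controls can be wired up with GLUT is shown below; the variable names and step sizes are illustrative assumptions, not the actual GLview code.

/* glview_keys.c - illustrative GLUT callbacks for the 3D view controls. */
#include <GL/glut.h>

static float scale = 1.0f;        /* zoom factor for the 3D view      */
static float rot_x = 0.0f;        /* rotation around the X axis (deg) */
static float rot_y = 0.0f;        /* rotation around the Y axis (deg) */

/* PageUp / PageDown scale the scene, arrow keys rotate it. */
static void special_keys(int key, int x, int y)
{
    (void)x; (void)y;
    switch (key) {
    case GLUT_KEY_PAGE_UP:   scale *= 1.1f; break;
    case GLUT_KEY_PAGE_DOWN: scale /= 1.1f; break;
    case GLUT_KEY_UP:        rot_x -= 5.0f; break;
    case GLUT_KEY_DOWN:      rot_x += 5.0f; break;
    case GLUT_KEY_LEFT:      rot_y -= 5.0f; break;
    case GLUT_KEY_RIGHT:     rot_y += 5.0f; break;
    }
    glutPostRedisplay();
}

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glScalef(scale, scale, scale);
    glRotatef(rot_x, 1.0f, 0.0f, 0.0f);   /* Up / Down arrows    */
    glRotatef(rot_y, 0.0f, 1.0f, 0.0f);   /* Left / Right arrows */
    /* ... draw the textured dissection planes here ... */
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("3D view sketch");
    glutDisplayFunc(display);
    glutSpecialFunc(special_keys);        /* arrow and PageUp/Down keys */
    glutMainLoop();
    return 0;
}

Dragging with the left button pressed could be handled analogously through glutMouseFunc / glutMotionFunc callbacks that update the same rotation angles.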

Table 3: User Interface of GLview

Group                    Detail
3 windows for 3D view    Longitude dissection plane
                         Sagittal dissection plane
                         Volume dissection cube
3 windows for 2D view    full-size Transverse (2048 x 1216)
                         full-size Longitude  (2048 x 1878)
                         full-size Sagittal   (1216 x 1878)
3 dissection modes       orthogonal dissection along X-Y-Z
                         arbitrary dissection (Long / Sagit)
                         volumetric dissection along X-Y-Z
8 pop-up menu items      selection of the above windows & modes
                         quit the application program



3.2. Gboss
Gboss creates control procedures to start the processing of each Gserver according to the requests from GLview. When GLview needs a full-size VHP image, it sends a message to Gboss containing the command string (Gslice), the dissection type Stype (Trans, Long or Sagit), and the two ends of the dissection line (Slice1, Slice2). After the Gservers finish their processing, Gboss assembles the data pieces into a full-size image and forwards it to GLview. Because a large data stream flows from Gboss to GLview, a parameter packet indicating the intended data size is sent before the RGB24 image data stream (Figure 3).


Figure 3: Messages between GLview and Gboss
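On the GLview side, the exchange in Figure 3 could look roughly like the sketch below: a request naming the command, dissection type and the two line endpoints, followed by reading the size parameter and then the RGB24 stream. The exact wire format (here a plain text line and a 4-byte size) and the host/port are assumptions for illustration.

/* glview_request.c - illustrative GLview side of the GLview <-> Gboss exchange.
 * Assumed format: a text request line, a 4-byte big-endian size packet,
 * then `size` bytes of RGB24 image data.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static unsigned char *request_slice(int sock, const char *stype,
                                    int slice1, int slice2, uint32_t *out_size)
{
    char req[128];
    uint32_t size_net;
    unsigned char *buf;
    size_t got = 0;

    /* Command string, dissection type and the two ends of the line. */
    snprintf(req, sizeof(req), "Gslice %s %d %d\n", stype, slice1, slice2);
    write(sock, req, strlen(req));

    /* The size parameter packet arrives before the image stream. */
    if (read(sock, &size_net, sizeof(size_net)) != (ssize_t)sizeof(size_net))
        return NULL;
    *out_size = ntohl(size_net);

    buf = malloc(*out_size);
    while (got < *out_size) {
        ssize_t n = read(sock, buf + got, *out_size - got);
        if (n <= 0) { free(buf); return NULL; }
        got += (size_t)n;
    }
    return buf;   /* RGB24 pixels of the full-size dissection image */
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    uint32_t size = 0;
    unsigned char *pixels;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(9200);                 /* assumed Gboss port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);      /* assumed Gboss host */
    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    pixels = request_slice(sock, "Long", 300, 500, &size);
    if (pixels) {
        printf("received %u bytes of RGB24 image data\n", size);
        free(pixels);
    }
    close(sock);
    return 0;
}

The returned buffer could then be displayed with OpenGL, for example via glDrawPixels or as a texture on a dissection plane.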

Gboss communicates with GLview through a socket connection, which is generally available on current TCP / IP networks. As a result, the interface layer GLview and the middle layer Gboss can be set up anywhere on a TCP / IP reachable network.

Gboss communicates with the Gservers through shared disks which are accessible to both Gboss and the Gservers via UNIX NFS (Network File System). The Gservers are controlled entirely by Gboss; the messages include Gload, which loads a subset of the VHP image data into each Gserver's memory, Gslice, which extracts dissection image pieces, and Abort, which terminates the Gservers' execution. Figure 4 shows the message exchange procedure and the contents of the task messages between Gboss and the Gservers.

Gboss has three command line options {start | stop | accept}. The start and stop options are executed independently of GLview; they only communicate with the Gservers, loading the VHP image data and terminating the Gservers' execution, respectively. The accept option communicates with both GLview and the Gservers, waiting for request messages from GLview and controlling the Gservers according to these messages.




Figure 4: Command Execution Procedures
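A sketch of how Gboss might write one task onto a Gserver's shared disk and wait for the result is given below; the contents of gtask.cmd, the directory layout and the polling interval are assumptions based on the description above and in Section 3.3.

/* gboss_dispatch.c - illustrative dispatch of one task to one Gserver
 * through its shared (NFS) directory. Task file format is assumed.
 */
#include <stdio.h>
#include <unistd.h>

/* Write the task structure, then the start trigger, then poll for finish. */
static int dispatch_task(const char *shared_dir, const char *command,
                         const char *stype, int slice1, int slice2)
{
    char path[512];
    FILE *fp;

    /* 1. Task data structure file gtask.cmd (assumed plain-text format). */
    snprintf(path, sizeof(path), "%s/gtask.cmd", shared_dir);
    fp = fopen(path, "w");
    if (!fp) return -1;
    fprintf(fp, "%s %s %d %d\n", command, stype, slice1, slice2);
    fclose(fp);

    /* 2. Start trigger file start.id tells the Gserver to begin. */
    snprintf(path, sizeof(path), "%s/start.id", shared_dir);
    fp = fopen(path, "w");
    if (!fp) return -1;
    fclose(fp);

    /* 3. Poll until the Gserver writes its finish identifier finish.id;
     *    the slice data is then available in slice.raw on the same disk. */
    snprintf(path, sizeof(path), "%s/finish.id", shared_dir);
    while (access(path, F_OK) != 0)
        usleep(10 * 1000);                 /* 10 ms polling interval (assumed) */

    return 0;
}

int main(void)
{
    /* Example: ask the Gserver sharing /nfs/gserver01 for a longitude slice. */
    if (dispatch_task("/nfs/gserver01", "Gslice", "Long", 300, 500) == 0)
        puts("slice.raw ready on /nfs/gserver01");
    return 0;
}

Gboss would repeat this over the 35 shared directories, typically in parallel, and then read each slice.raw to assemble the full-size image.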


3.3. Gserver
All the Gservers run the same server program, although each processes a different subset of the VHP image data. The program works according to the task messages from Gboss on the shared disk. A command is started by sending the task data structure file gtask.cmd and the start trigger file start.id. When a Gserver has finished its processing, it writes the image file slice.raw and the finish identifier file finish.id back onto its shared disk. Figure 5 illustrates the message exchanges between Gboss and a Gserver.


Figure 5: Message Transfer between Gboss and Gserver
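Correspondingly, a minimal sketch of the Gserver loop could look like the following: it waits for start.id, reads gtask.cmd, and writes slice.raw and finish.id back onto its shared disk. The file formats, mount point and the extraction step itself are placeholder assumptions.

/* gserver_loop.c - illustrative Gserver main loop over its shared disk. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SHARED_DIR "/nfs/gserver01"   /* assumed mount point of this Gserver */

int main(void)
{
    char cmd[32], stype[16];
    int slice1, slice2;
    FILE *fp;

    for (;;) {
        /* Wait until Gboss drops the start trigger file. */
        if (access(SHARED_DIR "/start.id", F_OK) != 0) {
            usleep(10 * 1000);
            continue;
        }

        /* Read the task data structure written by Gboss. */
        fp = fopen(SHARED_DIR "/gtask.cmd", "r");
        if (fp && fscanf(fp, "%31s %15s %d %d", cmd, stype, &slice1, &slice2) == 4) {
            fclose(fp);
            if (strcmp(cmd, "Abort") == 0) {
                break;                          /* terminate this Gserver      */
            } else if (strcmp(cmd, "Gload") == 0) {
                /* the real Gserver would load its VHP image subset here       */
            } else if (strcmp(cmd, "Gslice") == 0) {
                /* extract the dissection piece from the in-memory subset
                 * (see expressions (1),(2)) and write it back for Gboss       */
                fp = fopen(SHARED_DIR "/slice.raw", "wb");
                if (fp) fclose(fp);             /* placeholder for real pixels */
            }
        } else if (fp) {
            fclose(fp);
        }

        /* Remove the trigger and signal completion to Gboss. */
        remove(SHARED_DIR "/start.id");
        fp = fopen(SHARED_DIR "/finish.id", "w");
        if (fp) fclose(fp);
    }
    return 0;
}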

The Gslice message defines the dissection with Stype, Slice1 and Slice2. When Stype is Trans, only one Gserver is started to get the transverse image; when Stype is Long or Sagit, all the Gservers are started to perform an arbitrary dissection for longitude or sagittal images based on Slice1 and Slice2.

The data generation for a Gslice message is done in each Gserver's memory in parallel for the best responsiveness. A parallel projection algorithm is used to keep the algorithm simple and to maintain the same width between the original VHP images and the generated images (Figures 6, 7). This simple algorithm for dissection along an arbitrary direction performs a linear transformation between the space coordinates and the image coordinates and extracts the nearest pixel value: the dissection line is interpolated linearly between its two end points Slice1 and Slice2, normalized by the transverse image width Wmax for a longitude dissection (Figure 6) or by the transverse image height Hmax for a sagittal dissection (Figure 7). The transformations are given by expressions (1) and (2).


Figure 6: Linear Algorithm for Longitude Dissection

Hcoord = Slice1 + (Wcoord / Wmax) x (Slice2 - Slice1)
Vpixel = nearest(pixel[h, w], coord[h, w])                        (1)



Figure 7: Linear Algorithm for Sagittal Dissection

Wcoord = Slice1 + (Hcoord / Hmax) x (Slice2 - Slice1)
Vpixel = nearest(pixel[h, w], coord[h, w])                        (2)
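A sketch of how a Gserver might apply expression (1) to its in-memory transverse slices is shown below; the nearest-pixel rounding follows the description above, while the array layout, dimensions and the exact normalization of the interpolation are assumptions for illustration.

/* gslice_long.c - illustrative longitude extraction using expression (1).
 * Each Gserver holds `nslices` transverse RGB24 images of size W x H in
 * memory; for every column w it samples the nearest pixel on the line
 * interpolated between Slice1 (at w = 0) and Slice2 (at w = W - 1).
 */
#define W 2048          /* transverse image width  (pixels) */
#define H 1216          /* transverse image height (pixels) */

/* slices[s] points to one transverse image, H rows of W RGB24 pixels.
 * out receives nslices rows of W RGB24 pixels (one row per slice), so the
 * generated longitude image keeps the original width W.                  */
void extract_longitude(unsigned char **slices, int nslices,
                       int slice1, int slice2, unsigned char *out)
{
    int s, w, c;
    for (s = 0; s < nslices; s++) {
        for (w = 0; w < W; w++) {
            /* Expression (1): Hcoord = Slice1 + (w / Wmax)(Slice2 - Slice1),
             * rounded to the nearest row (nearest-pixel extraction).        */
            double hf = slice1 + ((double)w / (W - 1)) * (slice2 - slice1);
            int h = (int)(hf + 0.5);
            if (h < 0) h = 0;
            if (h > H - 1) h = H - 1;
            for (c = 0; c < 3; c++)
                out[(s * W + w) * 3 + c] = slices[s][(h * W + w) * 3 + c];
        }
    }
}

The sagittal case of expression (2) is symmetric: the loop runs over the image rows and the interpolation is normalized by the transverse image height.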


4. Experiment
While we were seeking support or an additional budget to implement a testbed for our proposal, the Information Center of Computer Communication fortunately purchased 50 G3 Macintosh computers (Apple Inc.) with Mac OS X Server for the information education of medical students. This gave us the chance to run our program on a "super" parallel computer equipped with 50 G3 CPUs, 25GB of memory (512MB x 50), 420GB (8.4GB x 50) of storage space and 50 independent UNIX operating systems.

The 1878 original images (VHP Male, about 15GB) are divided into 35 subsets; each Gserver keeps 50 or 100 images, amounting to about 373MB or 747MB respectively. The VHP image data can be located on any host and any disk volume, even in a different domain over the internet or an intranet, as long as it is reachable via NFS; the experiment showed, however, that loading images over the network takes more time because of the bottleneck of network transfer speed.
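These subset sizes follow directly from the cryosection dimensions in Table 1 (2048 x 1216 pixels, 3 bytes per RGB24 pixel); the small check below reproduces the figures used here and in Table 4.

/* subset_size.c - check of the per-subset memory figures quoted above. */
#include <stdio.h>

int main(void)
{
    long per_image = 2048L * 1216L * 3L;                  /* RGB24 cryosection */
    printf("per image : %ld bytes\n", per_image);         /* 7,471,104         */
    printf("50 images : %ld bytes\n", 50 * per_image);    /* 373,555,200       */
    printf("100 images: %ld bytes\n", 100 * per_image);   /* 747,110,400       */
    return 0;
}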

The interface layer is implemented in ANSI C on UNIX systems with the OpenGL library and the GLUT toolkit library (http://www.sgi.com) for better portability. Although it was originally programmed on an SGI machine running the IRIX 6.4 operating system, it has been successfully recompiled with Visual C++ Ver.4.0 to run on Windows9x/NT4.0, which supports the OpenGL32 and GLUT32 toolkit libraries (http://www.microsoft.com) (Figure 8), and on Debian Linux (http://www.debian.org) UNIX systems with the Mesa3D library (http://www.mesa3d.org) and the GLUT toolkit library installed.


Figure 8: User Interface on Windows98 with 3D Rendering Windows

In our experiments, we tested two ways of loading the VHP image data: loading from a central host storage that keeps all the VHP data, and loading from the local disk attached to each Gserver. Even though the latter requires distributing the data to each Gserver's disk before operation starts, it gives better performance for the Gload command because of the limited access speed of the central storage disk: the average loading time is 1,200 seconds for the former and 42.5 seconds for the latter. Loading from local disks is therefore the smarter choice, avoiding the overhead of concentrated access.

Table 4: Performance Measurement

Operation                        Process Time   Data Transfer    Involved Data      Involved Data
                                 of Gserver     Time of Gboss    by Gboss (bytes)   by Gserver (bytes)
Loading                          42.5 sec.      N/A              N/A                373,555,200 ~ 747,110,400
Transverse Dissection            2.0 sec.       0.8 sec.         7,471,104          7,471,104
Longitude Dissection (35 CPU)    1.4 sec.       1.1 sec.         11,538,432         307,200 ~ 614,400
Longitude Dissection (1 CPU)     2,139 sec.     1.1 sec.         11,538,432         307,200 ~ 614,400
Sagittal Dissection (35 CPU)     1.7 sec.       0.6 sec.         6,850,944          182,400 ~ 364,800
Sagittal Dissection (1 CPU)      2,129 sec.     0.6 sec.         6,850,944          182,400 ~ 364,800


With a straightforward method of retrieving an arbitrary dissection slice on 1 CPU, including loading and processing the VHP image data from local disk, it takes 2,139 seconds for a longitude dissection and 2,129 seconds for a sagittal dissection. With the proposed parallel processing method it takes only 1-2 seconds, which means our solution with 35 CPUs performs about 1,000 times faster than with 1 CPU.

As shown in Table 4, the time consumption was measured separately for each process. The process time of a Gserver includes the time to get the task message and the start command, extract the dissection slice data, write it to the shared disk, and write the finish message. The data transfer time of Gboss includes the time to get the finish messages, read the slice data from the shared disks, calculate the slice locations and fill the full-size image, and forward it to GLview through the socket connection. The involved data size of Gboss is the size of the packet transferred to GLview, and the involved data size of a Gserver is the size of the data extracted by each Gserver.


5. Conclusion
In this article, a new architecture and a portable implementation of a high-performance VHP image processing system are proposed. The high performance comes from (a) a three-layer architecture working independently and cooperatively over the TCP/IP network protocol, which is generally available over intranets and the internet, (b) a flexible user interface design, GLview, that provides 2D and 3D visualization for better understanding in anatomy education, (c) centralized management by Gboss over the parallel processors, which ensures high responsiveness to requests from the interface layer, (d) a fast retrieval scheme based on highly parallel processing on independent UNIX computers, and (e) portable code using OpenGL-related libraries and standard C functions normally available on UNIX systems. We completed the coding of the whole image processor and conducted a test run in the environment available in our Information Center, which proved the validity and efficiency of our ideas.

Future work should include (a) real-time morphology segmentation by parallel processing, (b) 3D visualization based on volume rendering technology, and (c) performance improvement by replacing NFS data transfer with TCP/IP socket connections.

We have also designed a parallel computer with 50 processors running the Linux OS on low-cost PC cores, which can perform the above work more efficiently. We hope to report our progress in the near future.


Acknowledgement
This research has been partly supported by the Science and Technology Agency, and the Ministry of Health and Welfare, Japan.

