The CSIM simulator provides animation and interactive execution. Interactive post-processing tools support intuitive data visualizations and analysis. Over 400 pre-defined models are included in the standard CSIM distribution.
CSIM's development has focused on speed and memory efficiency for large system scalability, as well as methods for abstract modeling while maintaining accuracy. It has been successfully applied to projects containing hundreds of processors, thousands of application tasks, or millions of entities.
The simulation environment consists of a modular set of optional tools and libraries.
Due to its versatility, CSIM is used for a wide variety of applications. As one example, consider hardware/software architecture modeling.
Example: Hardware/Software Architecture Performance Modeling Domain
CSIM represents a balanced approach to the design of parallel software and hardware systems. Several libraries are available for common processing systems. A parallel programming tool assists in analyzing and partitioning application flow-graphs, mapping and scheduling their constituent tasks onto hardware elements, and generating the respective parallel software programs.
For the hardware/software architecture modeling domain as an example, the CSIM environment is typically used as follows:
A model of a candidate hardware system is assembled from the library of processor and network models. This entails specifying the interconnection of processor elements, shared memories, and other I/O resources through various types of buses, crossbar switches, bridges, and links to form an overall model of the system architecture. It may also include modifying some of the component models to express new behaviors for new or proposed components. The bandwidth and latency of each communication link are specified.

The example presented here is merely one possible application domain for CSIM; there are many others, and the Examples show this one and several more.
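A minimal sketch of such an architecture description, written in Python for illustration (the component names, data layout, and cost model here are assumptions, not CSIM's actual input format):

```python
# Hypothetical candidate architecture: processor elements connected to a
# shared memory through a crossbar, with per-link bandwidth and latency,
# as described in the text. All names and numbers are illustrative.

architecture = {
    "components": ["PE0", "PE1", "PE2", "PE3", "Crossbar", "SharedMem"],
    # (src, dst, bandwidth in MB/s, latency in microseconds)
    "links": [
        ("PE0", "Crossbar", 800.0, 0.5),
        ("PE1", "Crossbar", 800.0, 0.5),
        ("PE2", "Crossbar", 800.0, 0.5),
        ("PE3", "Crossbar", 800.0, 0.5),
        ("Crossbar", "SharedMem", 1600.0, 1.0),
    ],
}

def transfer_time_us(nbytes, bandwidth_mb_s, latency_us):
    """First-order link cost model: latency plus size over bandwidth.
    (1 MB/s equals 1 byte per microsecond, so units cancel directly.)"""
    return latency_us + nbytes / bandwidth_mb_s

# e.g. a 4 KB message over one PE-to-crossbar hop:
t = transfer_time_us(4096, 800.0, 0.5)
```

Annotating every link with bandwidth and latency is what lets the simulator charge realistic costs to each message later on.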
The software application is expressed initially as a Data-Flow-Graph (DFG) that reveals potential parallelism in the algorithm. The DFG is processed by the SCHEDULER utility to produce a set of programs, one for each processor node, that represent scheduled partitions of the DFG which collectively accomplish the application in an efficient parallel manner. The programs are abstract, written at the level of nodal task calls and send/receive operations, which is appropriate for abstract performance modeling. Such pseudo-code can be expanded into compilable source code for the target systems. The pseudo-code programs can also be produced by other means; for example, designers sometimes know exactly what each processor should do without using a DFG.
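The mapping step above can be sketched with a simple list-scheduling heuristic: each ready task is placed on the processor that can start it earliest, honoring its data dependencies. This is a generic illustration of the idea, not the SCHEDULER utility's actual algorithm:

```python
# Minimal list scheduler: dfg maps each task to its predecessor tasks,
# durations gives each task's compute time. Returns, per task, the
# processor it was assigned to and its start/end times.

def list_schedule(dfg, durations, n_procs):
    proc_free = [0.0] * n_procs   # time at which each processor frees up
    placed = {}                   # task -> (proc, start, end)
    remaining = set(dfg)
    while remaining:
        # pick a ready task: all of its predecessors are already placed
        ready = [t for t in remaining if all(p in placed for p in dfg[t])]
        task = min(ready)         # deterministic tie-break by name
        data_ready = max((placed[p][2] for p in dfg[task]), default=0.0)
        proc = min(range(n_procs), key=lambda i: max(proc_free[i], data_ready))
        start = max(proc_free[proc], data_ready)
        end = start + durations[task]
        proc_free[proc] = end
        placed[task] = (proc, start, end)
        remaining.remove(task)
    return placed

# A diamond-shaped DFG: b and c both depend on a, d joins them.
dfg = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
durations = {"a": 2.0, "b": 3.0, "c": 3.0, "d": 1.0}
schedule = list_schedule(dfg, durations, n_procs=2)
```

With two processors, b and c run in parallel after a, so the whole graph finishes at time 6 rather than the serial total of 9.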
When executed within the CSIM simulator, the processor models interpret and respond to the software applications mapped to them. Specifically, component models can send, receive, wait for, and pass messages through their ports, and they can delay for computations -- all according to the sequence of instructions in their individual application programs.
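These execution semantics can be mimicked with a tiny interpreter: each node runs an abstract program of compute, send, and receive steps, with receives blocking until a message arrives. This is a toy sketch of the idea, not CSIM's actual simulation engine:

```python
import heapq

def run(programs, hop_delay=1.0):
    """programs: {node: [("compute", dt) | ("send", dst, msg) | ("recv",)]}.
    Returns each node's final clock and an event trace."""
    clock = {n: 0.0 for n in programs}
    pc = {n: 0 for n in programs}        # program counter per node
    inbox = {n: [] for n in programs}    # heap of (arrival_time, msg)
    trace = []
    progress = True
    while progress:
        progress = False
        for n, prog in programs.items():
            if pc[n] >= len(prog):
                continue
            op = prog[pc[n]]
            if op[0] == "compute":
                clock[n] += op[1]                       # delay for work
            elif op[0] == "send":
                _, dst, msg = op
                heapq.heappush(inbox[dst], (clock[n] + hop_delay, msg))
            elif op[0] == "recv":
                if not inbox[n]:
                    continue                            # block: no message yet
                t, msg = heapq.heappop(inbox[n])
                clock[n] = max(clock[n], t)             # wait for arrival
            trace.append((n, op[0], clock[n]))
            pc[n] += 1
            progress = True
    return clock, trace

clocks, trace = run({
    "PE0": [("compute", 5.0), ("send", "PE1", "result")],
    "PE1": [("recv",), ("compute", 2.0)],
})
```

Here PE1 blocks until PE0's message arrives at time 6 (5 units of compute plus one hop delay), then computes for 2 more units, finishing at time 8.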
Prior to simulation, a general-purpose router examines the architecture topology and builds a routing table for sending messages from processor to processor across the modeled network. This allows the software application programs to be portable across architectures, since the programs need only specify the logical destination of an outgoing message, without needing to know how it gets there. The pathway is referenced from the routing table for the given architecture.
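One common way such a routing table can be built, sketched here as an assumption rather than CSIM's documented method, is a breadth-first search over the topology that records the first hop on a shortest path for every source/destination pair:

```python
from collections import deque

def build_routing_table(links):
    """links: undirected (a, b) pairs -> {(src, dst): next_hop}."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    table = {}
    for src in adj:
        # BFS from src; remember the first hop on each shortest path
        first_hop = {src: None}
        q = deque([src])
        while q:
            node = q.popleft()
            for nbr in adj[node]:
                if nbr not in first_hop:
                    first_hop[nbr] = nbr if node == src else first_hop[node]
                    q.append(nbr)
        for dst, hop in first_hop.items():
            if dst != src:
                table[(src, dst)] = hop
    return table

# Illustrative topology: two PEs on a bus, bridged to a third PE.
links = [("PE0", "Bus"), ("PE1", "Bus"), ("Bus", "Bridge"), ("Bridge", "PE2")]
table = build_routing_table(links)
```

An application program addressing `PE2` from `PE0` never names the bus or bridge; the table resolves the first hop, which is what makes the same program portable across different modeled architectures.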
Once the application DFG has been mapped and the software has been generated for the processing nodes, the CSIM simulation of the software running on the candidate system architecture is run. This produces a wealth of information, such as the utilization of the various resources, including processor elements, buses, links, and memories, as well as time-line histories of processing and communication events. The time-lines can be viewed using the XGRAPH post-processing visualization tool. The designers can then analyze the performance of the system and their software mapping.
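As a hedged illustration of this kind of post-processing (the data layout here is assumed, not XGRAPH's actual file format), resource utilization can be computed from each resource's busy intervals on the time-line:

```python
# Given the busy intervals recorded for each resource during a run,
# compute the fraction of the total run time each resource was in use.

def utilization(busy_intervals, t_end):
    """busy_intervals: {resource: [(start, end), ...]} -> {resource: fraction}."""
    return {
        res: sum(end - start for start, end in spans) / t_end
        for res, spans in busy_intervals.items()
    }

stats = utilization(
    {"PE0": [(0.0, 5.0), (6.0, 10.0)], "Bus": [(5.0, 6.0)]},
    t_end=10.0,
)
# PE0 is busy 9 of 10 time units; the Bus only 1 of 10.
```

Summaries like this make load imbalances obvious at a glance: a nearly idle bus next to a saturated processor suggests the mapping, not the interconnect, is the bottleneck.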
The visualization tools provide considerable insight into the actual behavior of the system. Designers then suggest improvements to the mapping, the architecture, or both. Such improvements are iteratively tried and tested to optimize or verify the overall design.