CodeTime FAQ



What are the Benefits of the CodeTime Platform?

 

What are the Core Ideas of the CodeTime Computation Model?

 

What is the theory behind the platform?

 

What are the formal semantics of the computation model?

 

Why not just a language, why a whole platform with a new operating system, development environment and all?

 

How's CodeTime different from DataFlow?

 

How's CodeTime different from HPF, Sisal, and Linda?

 

Isn't this just another idea in the crowd of parallel languages and abstract models?

 

What is the OS like?


How does a CodeTime program use TCP connections, or graphics hardware?

 

What kinds of computers does CodeTime run on?

 

What's in an OS Instance; how is it different from a virtual machine?

 

Is there a real implementation of an OS Instance?

 

What is the programming language like?

 

What does a CodeTime program look like?



Back To Home

 

 


Q: What are the benefits of the CodeTime platform?

A: The CodeTime platform attempts to satisfy three goals: 1) high programmer productivity, 2) "write once, compile once, run high performance anywhere", and 3) wide acceptance.

It separates application-domain concerns from hardware concerns, allowing programmers to concentrate on domain-specific knowledge, and allowing hardware specialists to carefully craft back-end compilers and run-time schedulers, which are used across all applications.

It eliminates synchronization operations (locks, guards, tuple operations, and the like) and replaces them with declarative scheduling constraints. These constraints allow a back-end compiler and a run-time system to choose the size of code-segments and the size of the data-chunks given to those code-segments, which tunes the granularity to fit the hardware.
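To make the contrast concrete, here is a minimal sketch (not the real CodeTime API; `parallel_map` and its parameters are invented for illustration) of how application code can stay free of locks when granularity decisions belong to the runtime:

```python
# Hypothetical sketch, NOT the real CodeTime interface: the application
# states only domain logic; a stand-in "runtime scheduler" alone picks the
# chunk size, so granularity can be tuned to the hardware without touching
# application code, and no locks or guards appear in it.

def parallel_map(fn, data, chunk_size):
    """Stand-in for a runtime scheduler: it alone chooses chunk_size."""
    out = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]      # scheduler-chosen granularity
        out.extend(fn(x) for x in chunk)    # fn carries no synchronization
    return out

# The declarative constraint "each element is independent" is implied by
# choosing parallel_map; the application never writes a lock.
result = parallel_map(lambda x: x * x, list(range(8)), chunk_size=3)
```

A real back-end would run the chunks concurrently; the point of the sketch is only that chunking lives outside application code.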


Q: What are the Core Ideas of the CodeTime computation model?

A: Explicit constraints; scheduler plug-ins to separate application from hardware; bundling control-data with work-data; dynamically-created task-processors.

 

Q: What is the theory behind the platform?

A: The framework paper gives a context within which to understand how CodeTime fits in with other languages, abstract models, and virtual machines.

 

Q: What are the formal semantics of the computation model?

A: The formal semantics paper gives a full description. It includes extensions to Big Step semantics that adapt it to CodeTime's circuit elements.


Q: Why not just a language, why a whole platform with a new operating system, development environment and all?

A: The three goals of high programmer productivity, "write once, compile once, run high performance anywhere", and wide acceptance can only be achieved if the underlying dependencies are satisfied. The Integrated Platform paper describes these dependencies and shows how the goals cause the inclusion of each of the platform elements.

 

Q: Isn't this just another idea in the crowd of parallel languages and abstract models?

A: CodeTime's computation model enables a fundamentally new class of programming languages. It introduces the notion of declarative scheduling constraints, which in turn enables the separation of application concerns from hardware concerns. This is a major goal of parallel programming that other approaches have not satisfactorily achieved.

The theoretical framework paper and the operational semantics paper detail CodeTime's core differences and put them into perspective. The platform paper and the case for an integrated platform paper show how the ideas proposed in CodeTime let the platform achieve the goals.


Q: How's CodeTime different from DataFlow?

A: Dataflow, Tagged-Token Dataflow, and Large-Grain Dataflow lack program-command-level scheduling information. A program must use the underlying language's scheduler, which works from fine-grained scheduling constraints. Thus the languages don't provide the information that hardware schedulers need in order to change task sizes. The result is that effective scheduling is problematic for Dataflow-based languages and computation models.

In contrast, CodeTime explicitly declares the program-command-level scheduling constraints. This gives the hardware scheduler in the back-end compiler and the run-time system the information they need to tune the task-sizes to the hardware characteristics.

CodeTime's declarative scheduling constraints also enable scheduler plug-ins called "dividers", which are a key separation-of-concerns mechanism. Dividers are written by application programmers and perform the action of breaking a data structure into smaller pieces. The run-time's scheduler, meanwhile, monitors the status of the hardware and decides the size of each piece and where each piece goes. This puts the "how" of dividing into the application programmer's hands, and the "when", "where", and "size" of dividing into the hands of the run-time scheduler, which was written by hardware experts.
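The divider split can be sketched as follows. This is a hypothetical illustration, not CodeTime's actual plug-in interface; `matrix_row_divider` and `mock_scheduler` are invented names:

```python
# Hypothetical sketch of the "divider" idea: the application programmer
# supplies only the HOW of splitting a data structure; a runtime scheduler
# (mocked here) supplies the WHEN, WHERE, and SIZE.

def matrix_row_divider(matrix, num_pieces):
    """Application-side plug-in: knows HOW to split a matrix into row bands."""
    rows_per = max(1, len(matrix) // num_pieces)
    return [matrix[i:i + rows_per] for i in range(0, len(matrix), rows_per)]

def mock_scheduler(data, divider, free_processors):
    """Runtime-side code: picks the piece count from hardware status,
    then assigns each piece to a processor."""
    pieces = divider(data, free_processors)      # size decided here
    return list(enumerate(pieces))               # (processor id, piece)

matrix = [[1, 2], [3, 4], [5, 6], [7, 8]]
placement = mock_scheduler(matrix, matrix_row_divider, free_processors=2)
```

The divider never learns how many processors exist, and the scheduler never learns the matrix layout; each side can be replaced independently.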

The theoretical framework paper gives the context for understanding how the semantics of CodeTime's computation model differ from Dataflow's.


Q: How's CodeTime different from HPF, Sisal, and Linda?

A: HPF only allows the use of canned scheduling constraints that are built into specific language constructs such as DOALL loops. Sisal also gains most of its larger-grained parallelism from canned language constructs, being unable to gain much advantage from the underlying Dataflow because of the scheduling issues. Finally, Linda's tuple operations, and the coordination mechanisms of its descendant coordination languages, appear quite similar to declarative scheduling constraints. Closer inspection, however, reveals that they are mechanisms a program uses to build a custom scheduler: the constraints enforced by the implementation are still not readily available to a back-end compiler or run-time system.

The theoretical framework paper defines categories of languages, from the perspective of how they treat scheduling constraints. It then shows that CodeTime creates a fundamentally new category of language. The paper elucidates the details of how CodeTime differs from these other languages.

 

Q: What is the OS like?

A: At the heart of the OS is the "processor" abstraction. Each major kind of OS service is encapsulated in a processor, with which a program interacts. For example, a file is modelled as its own processor. A program reads from the file by opening a connection to the file-processor, then issuing the "read" command.
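A toy sketch of the file-as-processor interaction described above (the class, command names, and `open_connection` helper are all invented for illustration; the real OS protocol may differ):

```python
# Hypothetical sketch of the "everything is a processor" idea: a file is
# modelled as a processor with a small command protocol, and a program
# reads by connecting to it and issuing the "read" command.

class FileProcessor:
    def __init__(self, contents):
        self._contents = contents
        self._pos = 0

    def command(self, name, **args):
        if name == "read":
            n = args.get("count", len(self._contents) - self._pos)
            data = self._contents[self._pos:self._pos + n]
            self._pos += n
            return data
        raise ValueError(f"unknown command: {name}")

def open_connection(processor):
    """Stand-in for the OS step that connects a program to a processor."""
    return processor

conn = open_connection(FileProcessor("hello, codetime"))
first = conn.command("read", count=5)   # program issues the "read" command
```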

All entities in the Virtual Server are processors. This includes OS services, running programs, and hardware. All entities outside the Virtual Server are external processors.

The two most important processors are the Name Discovery Processor and the External Listener Processor. Name Discovery is how programs find hardware such as printers and CD-ROM drives, per-user "default" configuration files (which act like environment variables), and files with particular meta-information patterns and/or keywords. Meanwhile, the External Listener enables outside machines to initiate connections and remote users to log in.

So, the OS is quite simple. It consists of about six "built-in" processors plus the machinery for creating file processors and program-invocation processors, and for connecting to outside processors (web-servers, clients, and so on). Each processor has a well-defined set of commands and a protocol. This interface should be fairly simple to implement on top of most existing operating systems.

The CodeTime OS paper gives more details.

 

Q: How does a CodeTime program use TCP connections or graphics hardware?

A: Normally an application is not aware of what means is being used to form connections, nor of what hardware is being used for display. To an application, communication with external processors (such as web-servers and displays) is via a "processor" symbol placed in the program. Each hardware platform implements the action of the processor symbol in a hardware-specific way. Normally an application only needs the function of connection; the exact protocol is seldom important. When it is, stubs are provided by which a CodeTime program can communicate with an external program written in a standard language such as C++ or Java, which then implements the hardware-specific operations.

Rather than exposing graphics hardware, programs instead connect to a "User Interface Processor". One of these is created at the program's request, using information about the user who caused the program to run. The OS Interface has a dedicated processor that creates new UI processors. A program then receives keystrokes, mouse clicks and so on from the UI processor and sends display commands to it. A UI processor can be created in different flavors that understand different display protocols, such as OpenGL, DirectX, X-Windows, or MS Windows commands (thin wrappers around library calls).
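The event-in, command-out shape of the UI-processor interaction might look like this sketch (event tuples, method names, and the class itself are invented; the real protocol and its display flavors may differ):

```python
# Hypothetical sketch of a "User Interface Processor": the program receives
# input events from it and sends display commands back.  A flavor-specific
# backend (OpenGL, X-Windows, ...) would translate the commands; here we
# just log them.

from collections import deque

class UIProcessor:
    def __init__(self, events):
        self._events = deque(events)      # keystrokes, mouse clicks, ...
        self.display_log = []             # commands the program sent us

    def next_event(self):
        return self._events.popleft() if self._events else None

    def display(self, command):
        self.display_log.append(command)

ui = UIProcessor([("key", "a"), ("click", (10, 20))])
while (event := ui.next_event()) is not None:
    ui.display(("echo", event))           # program reacts to each event
```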

The OS paper describes the practice and implementation in more detail.

 

Q: What kinds of computers does CodeTime run on?

A: In theory, any kind. In current practice, any collection of general-purpose processors connected by a network, anything from multiple cores on a single chip to the 160,000-processor Blue Gene. A few "exotic" processors should also do well under the CodeTime abstraction, such as stream, reconfigurable, and massively-threaded processors. It is unclear how well SIMD, "Quantum", and so on will do.

 

Q: What's in an OS Instance; how is it different from a virtual machine?

A: Short answer: it includes all parts of the OS. Longer answer: it collects all the processors and machines together under a single abstraction, making them appear to be a single server. It models all parts of a system including persistent storage, interaction with other systems, creation of runs of programs, and so on. Normal virtual machines model just the physical processor with "escapes" to the OS. The CodeTime OS Instance is more of a virtual system, including hard-drives, OS and communication.

The OS paper describes OS Instances in more detail.

 

Q: Is there a real implementation of an OS Instance?

A: One is in progress. A proof-of-concept Run-Time system is done and working; a more useful one is about half done as of March 2006. The Proof of Concept Run-Time paper talks about the proof-of-concept run-time.

 

Q: What is the programming language like?

A: BaCTiL is a visual language that describes a circuit. Each circuit element is a segment of imperative code, a pin, or an instance of another processor. The program as a whole is treated by the OS Instance as a "blueprint" of a processor. When a program is "run", the OS Instance creates a new processor that has the program code embedded in it, and connects its pins.

The programmer places circuit elements, wires them together, and fills in the imperative-code elements, which are called code-units. The imperative language in the code-units is like a very simplified version of C or Java. It has only basic math, assignment, and conditionals. However, it has no loops. Loops are created by wiring an output to an up-stream input.
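The loop-by-wiring idea can be sketched as follows. This is a hypothetical illustration in ordinary Python, not BaCTiL itself; the pin names and the `run_circuit` driver (which plays the role of the feedback wire) are invented:

```python
# Hypothetical sketch of "loops by wiring an output to an up-stream input":
# the code-unit body below contains no loop construct, only a conditional
# choosing an output pin; the driver delivers data back along the wire.

def countdown_unit(n):
    """A loop-free code-unit: conditionally emits on one of two output pins."""
    if n > 0:
        return ("loop_pin", n - 1)       # wired back to this unit's input
    return ("done_pin", n)

def run_circuit(unit, value):
    """Stand-in for the runtime delivering data along wires."""
    steps = 0
    pin, value = unit(value)
    while pin == "loop_pin":             # the feedback wire in action
        steps += 1
        pin, value = unit(value)
    return value, steps

final, iterations = run_circuit(countdown_unit, 3)
```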

The BaCTiL paper describes BaCTiL in more detail.

 

Q: What does a CodeTime program look like?

A: BaCTiL is a hierarchical visual language. Leaf-level circuit elements containing programmer code, called code-units, are placed inside hierarchy-units. Hierarchy-units can then, in turn, be placed inside other hierarchy-units. Code-units are always leaf-level, but they may be freely placed alongside hierarchy-units and wired to them.

The Matrix Multiply example shows many elements of a BaCTiL program.

 

 

List of all CodeTime papers

List of CodeTime presentations

Back to Home