The TMT Software System Architecture is split into two parts called the Program Execution System Architecture (PESA) and Observation Execution System Architecture (OESA). The primary function of the PESA is to enable efficient management of astronomical programs from proposal creation to data delivery. The OESA facilitates effective observation of astronomical objects, as well as proficient command, control, and monitoring of all observatory functions. The job of the PESA can be summarized as getting observation descriptions to the OESA and delivering generated science data to external data centers. The basic components of PESA and OESA and their relationships are shown in the following figure.
The three major subsystems realizing PESA are the Science Operations Support System (SOSS), Data Processing System (DPS), and Data Management System (DMS). SOSS is the aggregate of software applications used to manage high-level science operations workflow from proposal preparation up to observation execution and data delivery. SOSS includes tools to support: (1) instrument simulators, proposal preparation, handling, review, and time allocation; (2) observation preparation, handling, review, and queuing; (3) observation scheduling; (4) observation execution and problem resolution; and (5) data delivery.
DPS enables the removal of atmospheric and instrument signatures from data produced by TMT science instruments, and it provides the tools needed to implement a long-term trending data quality assurance process. DMS provides the mechanisms and interfaces needed to capture, time-stamp, describe, store, access, visualize and (in some cases) archive all scientific information flowing through the TMT system (Observing Database). The DMS also provides the mechanisms and interfaces needed to capture, time-stamp, store, access, and visualize all engineering information flowing through the TMT system (Engineering Database).
The core of OESA is the Executive Software (ESW) subsystem, which includes the software tools necessary for efficient TMT operations. Specifically, the ESW contains the master system sequencer, which enables synchronized control of all TMT subsystems. ESW includes user interfaces for control and monitoring of TMT subsystems and overall environmental conditions. This subsystem enables the execution of observations as described in the operations plan for on-site and remote observers.
While the OESA command-and-control architecture is hierarchical, the communication scheme is highly parallel. The transition from one system configuration to another is a result of a sequence of activities initiated and coordinated by a master sequencer, the Observatory Control System. This coordination is accomplished in concert with a set of lower tier sequencers. The master sequencer establishes the appropriate OESA command-and-control hierarchy for each requested observation. In a logical sense, different hierarchical relationships may be established for different instrument observing modes.
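The hierarchical coordination described above can be sketched in Python. This is an illustration only; class names such as `MasterSequencer` and `SubsystemSequencer` are hypothetical and do not reflect the real TMT API:

```python
# Hypothetical sketch of the hierarchical sequencing pattern: a master
# sequencer establishes a hierarchy of lower-tier sequencers per observing
# mode and coordinates them. (The real communication scheme is highly
# parallel; steps are serialized here for simplicity.)

class SubsystemSequencer:
    def __init__(self, name):
        self.name = name
        self.log = []

    def execute(self, command):
        # In the real system this would drive Assemblies within the subsystem.
        self.log.append(command)
        return f"{self.name}: completed {command}"

class MasterSequencer:
    """Establishes a command hierarchy for each requested observation."""
    def __init__(self):
        self.hierarchy = []

    def configure(self, sequencers):
        # A different set of lower-tier sequencers per instrument observing mode.
        self.hierarchy = sequencers

    def run(self, sequence):
        results = []
        for step in sequence:
            for seq in self.hierarchy:
                results.append(seq.execute(step))
        return results

tcs = SubsystemSequencer("TCS")
iris = SubsystemSequencer("IRIS")
ocs = MasterSequencer()
ocs.configure([tcs, iris])
print(ocs.run(["slew", "expose"]))
```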
The architecture partitions the software into six logical groups of related functionality called Principal Systems to aid reasoning about the software system as shown below. Each principal system is focused on specific, easily-identifiable functionality and is itself a collection of other software components or systems. Viewing the system at this level allows one to think more easily about flow of control and where software functionality exists in the system. This view shows communication is hierarchical and flows down from Observatory Controls to the other principal systems. This command communication is low-bandwidth by design; any high-speed communication occurs within a single principal system. For instance, the high-speed adaptive optics loops occur only within the AO Controls Principal System.
The architecture is expanded in the following figure showing how the actual subsystems are organized into PESA and OESA. This figure drills down to show the subsystems that represent major hardware and software components within the Principal Systems. At the top of the figure are the observing user interfaces and the software components within Observatory Controls that sequence and synchronize the activities of the other systems to generate the user’s desired science data.
The integration of these software components across systems requires software infrastructure that is outside the scope of the individual components themselves. The horizontal green bar dividing the figure above in half represents this software infrastructure. The idea of shared software infrastructure based on a set of services and associated software focused on integration is a successful strategy in large observatory software architecture. TMT calls this software TMT Common Software (CSW). CSW is the implementation of the technical software architecture.
Common Software is a collection of software and services. A service is a set of related software functionality together with behavior and the policies that control its usage. TMT CSW uses external packages (i.e., software not developed by TMT) as the basis of the implementation of the CSW services. Abstractions and wrappers are present between the CSW services and the external packages to enable some isolation from specific product choices. For a component programmer, integrating a component with TMT requires the use of a service-oriented API and library code that must be included with the component. The table below lists the services provided by CSW to enable integration of the subsystems, along with a brief description of each.
| Service | Description |
|---|---|
| Authentication and Authorization Service | Centrally manage user authentication and access control |
| Location Service | Locate and register component connection information |
| Command Service | Support for receiving, sending, and completing commands in the form of configurations |
| Event Service | Publish/subscribe system for demands and other transient events |
| Alarm Service | Support component alarms and component and subsystem health |
| Configuration Service | Manage system and component configuration file changes |
| Logging Service | View, capture, and store local and distributed logging information |
| Database Service | Access to a shared, centralized, relational database |
| Time Service | Standards-based, precision time access for synchronization |
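As an illustration of the Event Service concept, here is a minimal in-process publish/subscribe sketch in Python. The real CSW Event Service is distributed and backed by an external product with a different API; the topic name and event fields below are invented:

```python
# Minimal publish/subscribe sketch: subscribers register callbacks on a topic,
# and published events (e.g., demands) fan out to all subscribers.

from collections import defaultdict

class EventService:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        for cb in self._subscribers[topic]:
            cb(event)

received = []
es = EventService()
es.subscribe("tcs.demands", received.append)
es.publish("tcs.demands", {"az": 120.5, "el": 45.0})
print(received)  # [{'az': 120.5, 'el': 45.0}]
```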
Open-source and commercial products implementing each of the needed services are readily available. CSW uses these products to integrate the services into one cohesive layer and to isolate them in order to allow replacement with minimal impact on the software system built upon them. (TMT has a requirement to support a 50 year lifecycle.) This is accomplished by specifying the behavior of each service and using a TMT-provided Application Programmer Interface (API) that isolates the details of the specific product from the TMT applications and components to the extent possible.
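The isolation strategy can be sketched as follows, assuming a hypothetical `TimeService` interface: components depend only on the TMT-facing API, while a thin adapter wraps whichever external product backs the service, so the product can be replaced without touching component code:

```python
# Sketch of product isolation behind a TMT-defined API. All names here are
# hypothetical; the real CSW APIs differ.

from abc import ABC, abstractmethod
import time

class TimeService(ABC):
    """TMT-facing API; components program against this only."""
    @abstractmethod
    def now_tai(self) -> float: ...

class SystemClockTimeService(TimeService):
    # Adapter over one possible backing implementation (the stdlib clock);
    # a PTP-based product could be swapped in without changing callers.
    def now_tai(self) -> float:
        return time.time() + 37.0  # illustrative TAI-UTC offset in seconds

def timestamp_event(svc: TimeService) -> dict:
    # A component uses the service through the abstract interface.
    return {"t": svc.now_tai()}

print(timestamp_event(SystemClockTimeService()))
```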
CSW was identified early in the project as critical infrastructure and it was the first software project to start construction. The Final Design Review for CSW was successfully completed in December, 2016. The CSW vendor, ThoughtWorks, a company based in Pune, India, was selected through a process executed by the India TMT Coordination Committee (ITCC) based at the Indian Institute of Astrophysics (IIA). Construction started in February, 2017 and concluded in July, 2019. The Pre-ship Review for CSW was passed in August of 2019. The CSW software is available on the TMT GitHub site at: https://github.com/tmtsoftware/csw. The CSW team is shown below at a recent inception meeting in Pune.
Executive Software (ESW) is the implementation of the Observatory Controls Principal System in the OSW software system architecture. All software components within ESW are deployed and execute at the telescope site or remote observing site. The ESW subsystems provide the core functionality needed to enable the TMT operations modes, including PI-directed observing, pre-planned service queue observing, and conditions-based queue-scheduled observing. Sequencing is the term used to describe the operations the software system must do to control the telescope subsystems and the instruments in the specific order needed to collect science data.
The Observatory Control System (OCS) subsystem of ESW is the central engine that provides the sequencing functionality. OCS provides the interface between the planning and scheduling software implemented by the OSW Science Operations Support Subsystem (SOSS) and the sequencing software as implemented by OCS and other control system subsystems. Planning information created during Phase 2 and contained in the Observing Database, a part of SOSS, is used to generate the Observing Block and the Sequence, which is passed to the ESW OCS for execution. The following figure shows a configuration of the OCS for an imaginary instrument observing mode.
ESW sequencing builds on CSW-defined components including the Sequencer, which is one of the Observing Mode Oriented Architecture components along with HCD, Assembly, Container, Service and Application. The Sequencer is programmable by running a “script” that can vary based on the observing mode in use. Scripts are stored in the ESW script repository. The Sequencer executes the Sequence by examining and processing each step of the Sequence and taking any actions the step requires, such as sending commands to other components. For most observations (i.e., science, calibration, engineering) a master OCS sequencer exists that controls other subsystem sequencers (such as TCS and IRIS) that control Assemblies in each subsystem.
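The step-by-step execution pattern can be sketched minimally as follows, with mode-specific behavior supplied by a "script" (here just a Python function; the real Sequencer scripting mechanism differs, and all step and command names below are invented):

```python
# Illustrative sketch of a Sequencer processing a Sequence: each step is
# examined and mapped to the commands the current observing mode requires.

def imaging_script(step, send):
    # Mode-specific handling: which commands each step type produces.
    if step["type"] == "setup":
        send("TCS", {"cmd": "pointAt", "target": step["target"]})
    elif step["type"] == "observe":
        send("IRIS", {"cmd": "startExposure", "itime": step["itime"]})

def run_sequence(sequence, script):
    sent = []
    def send(subsystem, command):
        # In the real system this would go through the Command Service.
        sent.append((subsystem, command))
    for step in sequence:
        script(step, send)  # examine each step, take the actions it needs
    return sent

sequence = [
    {"type": "setup", "target": "M31"},
    {"type": "observe", "itime": 60},
]
print(run_sequence(sequence, imaging_script))
```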
Executive Software passed its Final Design Review in May, 2019. The development was planned in two phases, ESW1 and ESW2, to give priority to the components that are essential to start the software design and development of the other observatory subsystems. The India-based team started construction of ESW1 in May 2019 with a planned completion at the end of 2021. The focus of this phase of ESW construction is the Sequencer and other infrastructure needed by other TMT software teams.
The TMT Software System can be broken down into software that is involved in the execution of an observation and software that either precedes or follows observation execution. It is useful to split the software this way because so much of what makes TMT software unique and challenging is associated with observation execution at the telescope. In the TMT Software System Architecture, the PESA systems support science operations: proposal handling, observation preparation, and scheduling. Following observation execution, science data must be stored, quality-checked, and forwarded to TMT Headquarters and external data centers.
A significant result of the TMT Operations Plan is agreement and definition of the operations modes that TMT will support and a workflow that describes the steps that occur when observing with TMT from an astronomer/TMT science user’s point of view. This is important for the TMT Software System and SOSS, because the software system design must demonstrate that it supports the operations modes and the end-to-end observing workflow. The workflow is shown below. Each rounded rectangle represents a step in the workflow from the perspective of a TMT user. Arrows show how the workflow progresses. Each rounded rectangle contains a number that is used to identify the step in the following discussion. Only step 10, Observation Execution, directly involves the telescope itself.
The observatory is scheduled using a time allocation process (TAC) based on two semesters per year. A TMT user, as potential Principal Investigator (PI), requests time through his/her partner TAC process using dedicated tools (1). Each partner holds a quantity of TMT observing time, and each partner is responsible for allocating that time to the proposals for which they are responsible using their own TAC process (2). This phase of the workflow is called "Phase 1".
At the conclusion of the proposal process, the partners submit their proposals to TMT (3). The proposals are scheduled by TMT and through an iterative process, a final 6-month long-term semester schedule is produced (4), combining science programs to be executed in visitor, service or eavesdropping modes (see the section on Observing Modes for more information), as well as technical time required for the commissioning of new instruments or systems, and for maintaining the observatory in full operations conditions.
"Phase 2" corresponds to the process of preparing and executing the observations. Regardless of their observing mode (visitor or service), the PIs must use TMT-provided tools to prepare their observations ahead of time with sufficient detail for execution by the support astronomer (6). A prototype Phase 2 planning tool for IRIS based on the Gemini Observing Tool is shown below.
In all operations modes, the support astronomer uses a TMT-provided Observation Selection Tool to select an observation for execution and start the process with target acquisition followed by data acquisition. For visitor mode observations, the visiting astronomer selects an observation from his/her own program. For service mode observations, the support astronomer selects the next program for execution using an adaptive scheduling tool, which proposes the highest priority observations to be carried out based on observatory policies.
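How such an adaptive selection might work can be sketched as follows, assuming a hypothetical policy that filters observations by current conditions and then picks the highest-priority feasible one (all field names and thresholds below are invented for illustration):

```python
# Sketch of conditions-based selection in service mode: keep observations
# whose constraints are satisfied by current conditions, then take the one
# with the highest scientific priority (lower rank = higher priority).

def select_next(observations, conditions):
    def feasible(obs):
        return (conditions["seeing"] <= obs["max_seeing"]
                and obs["airmass"] <= conditions["max_airmass"])
    candidates = [o for o in observations if feasible(o)]
    return min(candidates, key=lambda o: o["rank"], default=None)

queue = [
    {"id": "OB-12", "rank": 1, "max_seeing": 0.5, "airmass": 1.2},
    {"id": "OB-07", "rank": 2, "max_seeing": 0.9, "airmass": 1.1},
]
now = {"seeing": 0.7, "max_airmass": 1.5}
# OB-12's seeing limit is exceeded, so the lower-priority OB-07 is proposed.
print(select_next(queue, now))
```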
The Observation Execution step (10) loads all relevant information from the selected observation and executes the observing sequence to produce the desired science data. The execution sequence can be quite complex, involving the configuration and run-time control of many telescope systems. The products of each sequence execution are science frames and metadata, which are captured by the control system in the way the original observer intended. The Executive Software System (ESW) provides the software for this step.
After execution (10), the science frames are stored in the Data Management System (DMS) local storage system (11). Science frames are also distributed around the observatory to allow for visualization and any processing that may be needed prior to visualization or as part of target acquisition (12). The current construction plan has all science data and associated files pushed from the local storage system to a centralized data center (13). All TMT users would access their data through this centralized science archive (14) rather than the local storage system.