
The SimJava Tutorial



The method family for setting an entity to be paused consists of the sim_pause methods. The use of these methods is identical to that of the respective sim_process methods, except for the fact that time spent in the paused state does not count towards the entity's utilisation. The entities' utilisation and other statistical measurements will be discussed in a later section. Having described the runtime methods available to entities, we can proceed to implement the behaviour of the entities in our example.

The source entity will generate a job for the processor entity every 50 time units until a given number of jobs has been generated. The processor will wait for jobs; once one is received, it will process it for 30 time units. Odd-numbered jobs will then be forwarded to the first disk and even-numbered ones to the second.

The processing delay for the first disk will be 60 time units; the second disk is given its own, different delay. The code for each entity is sketched below. Note that the processor and the disks loop for as long as the simulation is running; this pattern should be used by all entities that exhibit a looping behaviour. The source entity uses a for loop, since it only generates a fixed number of events.
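A minimal sketch of the source and processor entities, assuming SimJava's eduni.simjava API (Sim_entity, Sim_port, sim_schedule, sim_wait, sim_process, sim_pause and Sim_system.running); the port names, tags and job count are illustrative rather than the tutorial's actual values:

```java
import eduni.simjava.*;

// Hypothetical source entity: emits a fixed number of jobs, one every 50 time units.
class Source extends Sim_entity {
    private Sim_port out;
    Source(String name) {
        super(name);
        out = new Sim_port("out");
        add_port(out);
    }
    public void body() {
        for (int i = 0; i < 100; i++) {   // job count is illustrative
            sim_schedule(out, 0.0, i);    // send job i to the processor
            sim_pause(50.0);              // 50 time units between jobs
        }
    }
}

// Hypothetical processor entity: waits for a job, processes it for 30 time units,
// then forwards odd-tagged jobs to disk 1 and even-tagged jobs to disk 2.
class Processor extends Sim_entity {
    private Sim_port in, disk1, disk2;
    Processor(String name) {
        super(name);
        in = new Sim_port("in");
        disk1 = new Sim_port("disk1");
        disk2 = new Sim_port("disk2");
        add_port(in); add_port(disk1); add_port(disk2);
    }
    public void body() {
        while (Sim_system.running()) {
            Sim_event e = new Sim_event();
            sim_wait(e);                  // wait for the next job
            sim_process(30.0);            // busy for 30 time units
            if (e.get_tag() % 2 != 0)
                sim_schedule(disk1, 0.0, e.get_tag());
            else
                sim_schedule(disk2, 0.0, e.get_tag());
        }
    }
}
```

The disk entities would follow the same looping pattern as the processor, waiting for a job and processing it for their own delay.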

Setting up the simulation

Now that we have fully defined our entities and their behaviour, we can proceed to set up the simulation itself. To define the simulation's main method we will create one further class. The name given to this class should be representative of the system being simulated, and should also be given to the file containing all the classes.

In this simple simulation four steps are required: initialise Sim_system, make an instance of each entity, link the entities' ports, and run the simulation. At this point the example simulation could be characterised as pointless, since no output is being produced and no animation is present.

Remember however that this is only a test simulation that will serve as the backbone onto which we will add detail and functionality as the tutorial continues.
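A minimal sketch of such a class, assuming Sim_system's initialise, link_ports and run methods; the class name, the Disk entity (defined analogously to Processor, not shown) and the second disk's delay are illustrative:

```java
import eduni.simjava.*;

// Hypothetical top-level class; its name (and the file's name) should
// describe the system being simulated.
public class ProcessorSystem {
    public static void main(String[] args) {
        Sim_system.initialise();                      // step 1: initialise Sim_system
        Source source = new Source("Source");         // step 2: one instance per entity
        Processor proc = new Processor("Processor");
        Disk disk1 = new Disk("Disk1", 60.0);
        Disk disk2 = new Disk("Disk2", 90.0);         // second disk's delay is illustrative
        // step 3: link the entities' ports
        Sim_system.link_ports("Source", "out", "Processor", "in");
        Sim_system.link_ports("Processor", "disk1", "Disk1", "in");
        Sim_system.link_ports("Processor", "disk2", "Disk2", "in");
        Sim_system.run();                             // step 4: run the simulation
    }
}
```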

The final class for our example follows this pattern; the code for the simulation up to this point is available here.

In the previous section, predicates were mentioned in connection with an entity's runtime methods. We saw that all of the main method families, apart from the event scheduling methods, have variations that make use of predicates. Predicates are conditions that are used to select events. They can be used either to select events already present in the entity's deferred queue, or to selectively wait for future events that match the given predicate.

All predicates are implemented as classes that need to be instantiated and passed to the related methods. Three main types of predicates are available:

- General predicates
- Event tag (or type) predicates
- Source entity predicates

Two kinds of general predicates are available to the modeller: one that selects any event and one that selects none. The modeller does not have to make new instances of their classes to use these predicates, since ready-made instances are provided.

Event tag predicates are used to select an event on the basis of its tag: one predicate class selects events whose tags match a given tag, while a second selects events whose tags differ from it. In both cases a predicate object may also be instantiated by specifying a set of tags; in the first case an event will be selected if its tag matches any of the specified tags, and in the second case it will be selected if it differs from all of them.

Source entity predicates are very similar to those based on event tags; in this case, however, what is checked is the source entity of the event. Similar variations are available that check any number of source entities, as in the event tag predicate classes. The source entities are specified by their unique id numbers.
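As a sketch, instantiating one of each kind inside an entity's body() might look as follows; the class and field names (Sim_predicate, Sim_type_p, Sim_from_p, Sim_system.SIM_ANY, Sim_system.SIM_NONE) follow the SimJava documentation but should be treated as assumptions, and the tags and entity id are illustrative:

```java
// Hypothetical predicate instantiations (tags and entity id are illustrative):
Sim_predicate any     = Sim_system.SIM_ANY;                  // general: selects any event
Sim_predicate none    = Sim_system.SIM_NONE;                 // general: selects no event
Sim_predicate oneTag  = new Sim_type_p(1);                   // events with tag 1
Sim_predicate tagSet  = new Sim_type_p(new int[] {1, 2, 3}); // tag matches any of a set
Sim_predicate fromSrc = new Sim_from_p(0);                   // events from entity with id 0
```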

How to use predicates

To use a predicate, the modeller needs to make an instance of the desired predicate and pass it to a runtime method.

We have seen such methods for waiting, processing and pausing that have variations for defining predicates. In our example we don't have much need for predicates since each entity receives events from only one other entity.


Furthermore, all the events received by each entity have the same type. Simply for demonstration purposes, we will use a predicate in the processor entity to specify that it should only accept events from the source entity, as sketched below:
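A minimal sketch, assuming SimJava's Sim_from_p predicate, a predicate-accepting wait variant (sim_wait_for), and a lookup of the source entity's id by name; these method names are assumptions based on the SimJava documentation:

```java
// Inside the processor's body(): accept only events sent by the source entity.
int srcId = Sim_system.get_entity_id("Source");  // hypothetical lookup by entity name
Sim_from_p fromSource = new Sim_from_p(srcId);

Sim_event e = new Sim_event();
sim_wait_for(fromSource, e);                     // wait only for matching events
sim_process(30.0);                               // then process as before
```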

What is the trace of a simulation?

Simulations are often quite complex programs, and whenever complexity is added to a program the number of errors present is bound to increase. After building a simulation, it is always good practice to test it before fully instrumenting it with statistical measures and running it exhaustively.

A tool that is very useful in a simulation's debugging process (also known as verification) is the simulation's trace. The trace is essentially internal information made available to the modeller, through which the exact actions within the simulation can be examined. Such an examination could lead to the discovery of undesired entity behaviour, which would require the simulation's modification. Three types of trace are provided: the default trace, the entity trace, and the event-specific trace. Default trace is produced whenever an event is scheduled, received, cancelled, and so on.

Furthermore, default trace is generated when an entity starts to exhibit some behaviour, e.g. when it starts waiting or processing. The entity trace, in contrast, is produced by the modeller, who modifies the entities to produce trace output through a trace method; this method is passed an integer that is used as a bit mask for checking the level of each entity trace statement.

If the bitwise AND of the system level and the message level is not 0, the message is added to the trace file. The event-specific trace is identical in content to the default trace; the difference is that trace will only be produced for events that have been specified to be of interest. The purpose of this is to allow the modeller to focus on specific event types and not be overwhelmed by the volume of the entire default trace.

Which traces are produced is controlled by a Sim_system method that takes three flags. The first flag specifies whether or not to add the default trace; the second similarly concerns the entity trace, and the third the event-specific trace. Note that the flag for adding the event-specific trace is meaningful only if the default trace is switched off. If none of these methods are used, the default is to produce no trace at all; this is because, in large experiments, the trace file can grow to an extremely large size and slow down the simulation.
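Putting the pieces together, a sketch of how tracing might be switched on and produced; the method names set_trace_detail, set_trace_level and sim_trace follow the SimJava documentation as described above, and should be treated as assumptions:

```java
// In the simulation's main method: entity trace only, with a level-1 bit mask.
Sim_system.set_trace_detail(false, true, false); // default off, entity on, event-specific off
Sim_system.set_trace_level(1);

// Inside an entity's body(): this statement passes the mask check since (1 & 1) != 0.
sim_trace(1, "Job received, starting processing");
```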

In our example we are going to add entity trace through the processor. At this point we should note that tracing takes on a completely different meaning when animation is involved: the entity trace is then used to update animation parameters and to display event schedulings.

Why is good sampling important?

In the simulation we have built up to this point we have used fixed delays for the pausing and processing of entities.


The delay for each entity was a double value that was explicitly set by the modeller when each entity was instantiated; in other words, the delays in the simulation were deterministic. It is almost impossible in real life to capture an entity's behaviour with deterministic values. It is more suitable to describe an entity's performance characteristics by means of distributions, sampling them to produce specific values for, e.g., processing delays. By using distributions the real world is more accurately simulated, since the entities' behaviour will be, as in the real world, stochastic.

A good example of the importance of non-determinism is a network router whose packet interarrival times are far from deterministic. In order to generate samples from a distribution a random number generator needs to be used.


This generator is sampled to produce a value uniformly distributed between 0 and 1. The uniform sample is then transformed to fit the desired distribution; this process is called random variate generation.
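As a plain-Java illustration of random variate generation by the inverse-transform method (SimJava also ships ready-made distribution classes for this purpose; the class below is purely illustrative), an exponential delay with a given mean can be produced as follows:

```java
import java.util.Random;

// Inverse-transform sampling: map a uniform sample in [0, 1) to an
// exponentially distributed value with the given mean.
public class NegExpSampler {
    private final Random rng;
    private final double mean;

    public NegExpSampler(double mean, long seed) {
        this.mean = mean;
        this.rng = new Random(seed);      // seeded for repeatable experiments
    }

    public double sample() {
        double u = rng.nextDouble();      // uniform in [0, 1)
        return -mean * Math.log(1.0 - u); // inverse of F(x) = 1 - exp(-x/mean)
    }
}
```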

The uncertainty of the half-life

From a metrological point of view, it is obvious that instruments, electronics, geometry and background may vary due to external influences such as temperature, pressure, humidity and natural or man-made sources of radioactivity.

Therein lies the problem with half-life measurements: such influences can bias the result in ways that are difficult to quantify. Consequently, nuclear data evaluators are frequently confronted with the problem of deriving a recommended value for a half-life from a discrepant set of data. Evaluations show that, for the majority of radionuclides, the spread of experimentally determined half-life values is larger than expected from the claimed accuracies [16].

Since the published data are not completely reliable, alternative methods have to be used to obtain a mean and an associated uncertainty value. The situation is often aggravated by experimenters providing insufficient detail on how the half-life and its uncertainty were determined. A more comprehensive reporting style, which provides traceability for all major aspects of the measurement that may influence the result, together with tools for assessing the quality of the data, is recommended [15].

With time, one can expect evaluators to disregard published decay data lacking a sufficient level of traceability, as a growing number of well-documented experiments become available in the literature. It is in the interest of the user community that the quality of decay data, and half-lives in particular, be improved [19, 20]. Whether the applications are situated in the field of nuclear medicine, power generation, nuclear forensics, radioactive waste management, analytical techniques, astrophysics, geochronology, basic nuclear research or detector calibration using reference sources, there is generally a half-life correction factor involved in rescaling measured activities to a reference time.

This article discusses the propagation of the uncertainty of the half-life in activity measurements, and the difficulties in providing an uncertainty budget when measuring half-lives.

Applications of half-lives

Rescaling of activity

As the activity in a radioactive sample changes with time, it is appropriate to associate a measured activity with a reference time t0, which does not necessarily coincide with the start or stop times of the measurement, t1 and t2.
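In symbols, this is the standard decay-correction relation, writing lambda for the decay constant:

```latex
A(t_0) = A(t_1)\, e^{\lambda (t_1 - t_0)}, \qquad \lambda = \frac{\ln 2}{T_{1/2}}
```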

Routine laboratories calibrate their activity detectors by means of secondary standards: calibrated sources with traceable activities of certain radionuclides. These standards are kept and reused for a practical period of time.


Consider for example a 10 year-old 22Na source (T1/2 about 2.6 a). The standard uncertainty on the calibration factor due to the uncertainty on the half-life grows with the age of the standard, as the propagation below shows.
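The growth of this uncertainty with elapsed time follows from propagating the half-life uncertainty through the decay factor, a standard derivation:

```latex
f = e^{-\lambda t} \quad\Rightarrow\quad
\frac{u(f)}{f} = \lambda t \,\frac{u(\lambda)}{\lambda}
               = \ln 2\,\frac{t}{T_{1/2}}\cdot\frac{u(T_{1/2})}{T_{1/2}}
```

For the 22Na example above, t / T1/2 is about 3.8, so the relative half-life uncertainty is amplified by a factor of roughly 2.7 in the decay correction.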

Larger errors can occur for radionuclides with shorter or less reliably known half-lives. Activation formulas typically contain a saturation factor for the activation during a period tirr, a decay factor, and a counting factor C as above. One way of age dating is based on known atomic concentrations P(t) of the parent nuclide at different times; a typical example is 14C dating (T1/2 about 5730 a). The relation is sketched below.
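The dating relation, reconstructed here from the decay law under the assumption that the initial concentration P(0) is known:

```latex
P(t) = P(0)\, e^{-\lambda t}
\quad\Rightarrow\quad
t = \frac{T_{1/2}}{\ln 2}\,\ln\frac{P(0)}{P(t)}
```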

Isochron dating methods use the concentration S of a stable isotope of the daughter element as a reference. Ratios are used instead of absolute concentrations because they are conveniently measured with mass spectrometers.
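The standard isochron relation, with D the radiogenic daughter, P the parent and S the stable reference isotope (included here for context; the source's own equation is not reproduced):

```latex
\frac{D}{S} = \left(\frac{D}{S}\right)_{0} + \frac{P}{S}\left(e^{\lambda t} - 1\right)
```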

Another application is dating 'young' groundwater, i.e. water that infiltrated relatively recently. The isochron method can also be applied in uranium-lead dating, using 204Pb as the non-radiogenic isotope. The initial Pb isotopic ratios extracted from meteorites and the age of the system are the two factors determining the present-day Pb isotopic ratios; the age of Earth, about 4.5 billion years, was derived along these lines. For several such applications, the uncertainty of the half-lives involved has become the bottleneck [25]. Another dating technique, 210Pb sediment radiochronology [26], is frequently applied to reconstruct past environmental conditions of ecosystems.

Due to various transport processes, e.g. atmospheric deposition, 210Pb can be present in sediment layers in excess of the amount supported by its parent; the amount of excess 210Pb in the various layers of a sediment core is an indicator of the accumulation rate.

Nuclear forensics

Radioactive disequilibrium between parent and daughter arises when the two are separated by physicochemical processes, e.g. during the purification of nuclear material.

Following this separation, the daughter starts to grow in again, and information on the time of separation may be obtained by measuring the ratio of parent to daughter atoms. This property is applied in nuclear forensics, which aims at fingerprinting nuclear materials [23, 27, 28]. The relative amounts of parent and decay products can be used to identify the age and source of the material. Nuclides of interest are actinides (Th, U, Np, Pu, Am) with potential applications in improvised nuclear devices, and other nuclides that could be applied in radiological dispersion devices (e.g. 60Co, 90Sr, 137Cs).

The time scale of interest is in the 1-50 year range, which is much shorter than in geology. Age dating of U is more difficult than Pu dating, because the long half-lives of the uranium isotopes lead to minute amounts of ingrowing daughter nuclides. Besides the quality of the separation, the current uncertainties on the relevant half-lives are a major limiting factor on the attainable accuracy.

Age determinations based on atom ratios are therefore more sensitive to the parent half-life than to the daughter half-life (see the relations below).

Conversely, age determinations based on activity ratios are more sensitive to the daughter half-life than to the parent half-life, which is the opposite of the effect noticed for atom ratios. Activity ratio measurements enhance the signal of the short-lived nuclide, and may be a good alternative to atom ratio measurements in cases where the atom concentration of the short-lived nuclide is particularly low. The relative uncertainty of the half-life of the parent (via R) or of the daughter nuclide (via RA) is of equal importance as the relative uncertainty of the measured ratio R or RA itself.
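A sketch of why the two ratio types have opposite sensitivities, using the standard ingrowth relations for a freshly separated parent-daughter pair at times short compared with both half-lives:

```latex
\frac{N_d}{N_p} \approx \lambda_p t \quad \text{(atom ratio: driven by the parent half-life)}
\qquad
\frac{A_d}{A_p} = \frac{\lambda_d N_d}{\lambda_p N_p} \approx \lambda_d t \quad \text{(activity ratio: driven by the daughter half-life)}
```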

Dating of a nuclear event

The principles of radiometric dating can also be applied to fission products created in a nuclear explosion.


Radionuclides may attach to aerosols or be released as noble gases and get collected in air samplers at a remote location. One can distinguish isobaric and non-isobaric clocks [ 30 ]. Non-isobaric clocks start from theoretical cumulative fission yields for the calculation of initial activity ratios and use the current activity ratio of fission products with different half-lives to estimate the time elapsed since the nuclear event.

Isobaric clocks are based on parent-daughter pairs of which the daughter nuclide is not directly produced in the fission reaction. For example, they are directly applicable to the 140Ba/140La clock [31], based on progeny of the short-lived noble gas 140Xe. The aerosol-bound 95Zr/95Nb chronometer is another important clock, which requires a more elaborate mathematical treatment due to the presence of meta-stable 95mNb in the decay scheme [32].

Starting from an initial activity A0, the accumulated dose is proportional to the number of atoms that decay in the body, as sketched below. The relative uncertainty of the effective half-life propagates linearly to the dose. The biological half-life cannot be determined as precisely as the physical half-life, and the dominance of either rate differs from one radionuclide to another.
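The elided relation can be sketched as follows (a reconstruction from the decay law, with the effective decay constant combining physical decay and biological elimination):

```latex
N_{\mathrm{dec}}(t) = \frac{A_0}{\lambda_{\mathrm{eff}}}\left(1 - e^{-\lambda_{\mathrm{eff}} t}\right)
\;\xrightarrow{\;t\to\infty\;}\; \frac{A_0\,T_{\mathrm{eff}}}{\ln 2},
\qquad
\frac{1}{T_{\mathrm{eff}}} = \frac{1}{T_{\mathrm{phys}}} + \frac{1}{T_{\mathrm{bio}}}
```

Since the lifetime dose is proportional to T_eff, a relative uncertainty in T_eff propagates linearly to the dose, as stated above.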

For many long-lived nuclides, such as 3H, 14C, 22Na, 36Cl, 60Co, 137Cs and U, the much shorter biological half-life (10-70 d) is dominant [34]. Only where the physical half-life dominates does the dose calculation depend significantly on its accuracy. The bone seekers 90Sr, 226Ra and 239Pu have extremely long effective half-lives (18 a and longer) [34], which reduces the uncertainty propagation towards the accumulated dose over a person's lifetime.

Measurement of short half-lives

Half-lives of excited nuclear states, giving access to transition probabilities, provide direct insight into the structure of the nucleus and offer one of the most stringent tests of nuclear models.

Different measurement techniques are applied to cover half-life ranges between picoseconds and seconds.

Picoseconds to nanoseconds

Short lifetimes of excited nuclear levels, ranging from about 1 ps to several ns, have been measured by the Recoil Distance Doppler-Shift (RDDS) method [35, 36] for a large variety of nuclei all over the nuclear chart.

SIMD

Modern graphics processing units (GPUs) are often wide SIMD implementations, capable of branches, loads, and stores on 128 or 256 bits at a time.


In ordinary (scalar) code, tripling four 8-bit numbers means fetching a number, multiplying it by three, and storing the result, with the process repeated for each of the four numbers. With SIMD, the tripling of four 8-bit numbers is done at once: a single instruction loads all four values, a single multiply triples them, and a single instruction stores the results back.

In theory, the speed can be multiplied by 4. Some systems also include permute functions that re-pack elements inside vectors, making them particularly useful for data processing and compression. They are also used in cryptography.
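As a sketch in this document's language, Java's Vector API (an incubator module, jdk.incubator.vector, available since JDK 16) expresses this pattern directly; the species choice and array layout below are illustrative:

```java
import jdk.incubator.vector.ByteVector;
import jdk.incubator.vector.VectorSpecies;

// Run with: java --add-modules jdk.incubator.vector TripleSimd
// Triples every 8-bit element of the array, a vector's worth of lanes at a time.
public class TripleSimd {
    private static final VectorSpecies<Byte> SPECIES = ByteVector.SPECIES_128;

    static void triple(byte[] a) {
        int i = 0;
        int bound = SPECIES.loopBound(a.length);
        for (; i < bound; i += SPECIES.length()) {
            ByteVector v = ByteVector.fromArray(SPECIES, a, i);
            v.mul((byte) 3).intoArray(a, i);  // one multiply covers all 16 lanes
        }
        for (; i < a.length; i++) {
            a[i] *= 3;                        // scalar tail for any leftovers
        }
    }
}
```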

Adoption of SIMD systems in personal computer software was at first slow, due to a number of problems.

One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating point registers. Compilers also often lacked support, requiring programmers to resort to assembly language coding.