Data Presentation and Graphical User Interface
Time and Frequency Domain
Time-Frequency Analysis
Lissajous Graphs
Histogram
Filtering
Template Matching
Principal and Independent Component Analysis
Heuristics
Signal Specific Processing
EEG – frequency bands, evoked responses
ECG – IBI, BPM, HRV
EOG – Derivative
EMG – Median/Mean Frequencies
EGG – Propagation
Non-Invasive Cardiac Output
Blood Flow
Blood Pressure
Air Pressure and Volume
AirFlow – Integration, Cycle Reset
Nerve Conduction Velocity
Auditory Brainstem Response
Visual Evoked Response
Cognitive Responses (P100, P300, P600)
Data Presentation and Graphical User Interface
The primary goal of signal processing methods is to enable researchers to find meaningful patterns in data. Physiological data is first evaluated, after simple amplification, by visual examination. The eyes and brain grant people exceptional visual pattern-recognition abilities. Signal processing methods should therefore be considered in tandem with the visual presentation of the output data in order to provide the most effective analysis tools for researchers.
A researcher will begin to understand a phenomenon as it first impacts the senses. After this first encounter, the researcher may decide that a different view is required for additional insight. In people and animals, this behavior is sometimes noticed as a change in orientation of the subject’s head, when the subject is observing an interesting or unusual phenomenon. Head-tilting can result in different visual and auditory frames, thus allowing for differing perspectives. These different perspectives can aid the subject to better understand the phenomenon.
Similarly, signal processing methods used in conjunction with data presentation methods should allow for a wide variety of options when processing data, both to emphasize hidden attributes and to allow for different views. Consider the case of any three dimensional object undergoing inspection. If only three viewing positions were allowed, the optimal views would be along the X (length), Y (width) and Z (height) directions. With these three views, the exterior of the object would be very clearly identified. The X, Y and Z directions each provide a view orthogonal to the others. Orthogonal vectors are fully independent from one another. Data profiles seen along one axis provide no clue as to the views in other axes. All three perspectives are needed to fully grasp the nature of the object.
Time and Frequency Domain
The time and frequency domains are different dimensions. These domains afford orthogonal perspectives of data. The time domain is familiar, because we see its evidence so clearly in the world around us. As an example, the trail left by an animal is intuitively related to a time-domain view of the animal’s movement, because we can mentally substitute time for distance traveled along a path. A meandering trail in a particular direction can easily be recognized as a meandering trail as a function of time. When we see a bird flap its wings, we notice that, over time, the wings are in different positions.
The frequency domain is similarly familiar, but its manifestation is perhaps clearer to the ears than to the eyes. When we hear a boulder tumbling down a hill, we can estimate its size by the sound it makes. A bigger boulder makes louder and lower sounds than a small one. When hearing bird calls, we can discriminate one species from another by listening to the pitch and the warble.
When sight and sound come together, to describe the world around us, the world becomes increasingly clear. And, of course, more so as additional senses are included. Sight, sound, touch, taste and smell provide differing and important perspectives on what we experience.
Two basic signal processing methods for physiological data are:
1. Filtering of time domain based waveforms. Filters allow for the suppression of unwanted frequencies and the emphasis of desired frequencies. An ability to filter data provides a tremendous ability to discern patterns. Filters can be tuned to accept the energies of specific data frequencies to better isolate and identify that data.
2. The Fourier Transform. The Fourier Transform converts a time domain based waveform into a frequency domain based waveform. When applied to time-series data, it presents an orthogonal view of the original time domain based waveform, a view through a different dimensional portal, one based in the frequency domain rather than the source time domain. The output becomes a representation of the frequency components present in the source data. The display vectors of the Fourier Transform’s output are amplitude (vertical) versus frequency (horizontal), whereas the display vectors of the source time-series data are amplitude versus time. The Fourier Transform rests on the mathematical result that any length of time-series data can be reproduced by the summation of a possibly infinite series of sine waves of varying amplitude and phase. The Fourier Transform is an orthogonal, linear transformation: it converts time-series data to the frequency domain, and the Inverse Fourier Transform converts frequency-series data back to the time domain.
Two basic signal presentation tools for physiological data are:
1. Graphing of time domain based waveforms. This tool is a visual plot of the waveform amplitude as a function of time.
2. Graphing of frequency domain based waveforms. This tool is a visual plot of the frequencies present in a selected portion of a time domain based waveform. This graph will show the relative amplitudes of the frequencies, in a portion of time-series data, as a function of frequency.
These signal processing and graphing tools are phenomenally important. When they can be employed highly interactively, under rapid researcher control, these tools permit rapid evaluation and understanding of data. Orthogonal views of the data are provided by the juxtaposition of the time and frequency domain viewing methods. Discrimination of data, by emphasizing some aspects and suppressing others, is provided by the use of filtering. Filtering methods and the Fourier Transform are highly complementary. The Fourier Transform provides a view of the frequency amplitude and phase components in the data. Filters can then be designed to emphasize the desired components and attenuate the interfering components. After time-series data is filtered, it can be examined again with the Fourier Transform to see the new formulation.
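As a minimal sketch of these two tools, assuming NumPy and SciPy are available and an illustrative 1000 Hz sampling rate, the following Python fragment computes the amplitude spectrum of a time-series segment, lowpass filters the data, and then re-examines the spectrum:

import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)               # two seconds of data
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)   # 10 Hz signal plus 60 Hz interference

# Frequency-domain view: amplitude versus frequency via the Fourier Transform
spectrum = np.abs(np.fft.rfft(x)) / len(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

# Lowpass filter at 30 Hz to suppress the 60 Hz component, then look again
b, a = butter(4, 30.0, btype='low', fs=fs)
x_filtered = filtfilt(b, a, x)
spectrum_filtered = np.abs(np.fft.rfft(x_filtered)) / len(x)

Plotting x against t gives the time-domain graph; plotting spectrum against freqs gives the frequency-domain graph.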
Time-Frequency Analysis
The Fourier Transform provides detail as to what frequency (spectral) components exist in the processed signal. However, this transformation operates under the assumption that the processed signal is stationary. Stationary means that the statistical properties of the data do not change over time. If a specific section of time-series wave data is processed by the Fourier Transform, the data is assumed to repeat identically for all times before and after the selected section.
Physiological data is not stationary. Accordingly, the Fourier Transform has limited and very specific applicability for providing frequency-based transformations of time-based, measured, physiological data. The Fourier Transform does not provide any information about when a specific spectral component occurs, only that the component is present somewhere in the (assumed infinitely long) time-series source data.
When time localization of processed frequency components is needed, a time-frequency transform of the signal is required. The short-term Fourier Transform and the Wavelet Transform are types of time-frequency transforms. Physiological data often contains particular spectral components that occur at non-periodic intervals, such as those measurements associated with stimulus and response. When performing detailed analysis, it can be helpful to know the time intervals between these particular spectral components.
The Wavelet Transformation was developed as an alternative to the short-term Fourier Transform.
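As a hedged illustration of the short-term Fourier Transform, SciPy’s spectrogram routine can be applied to a deliberately non-stationary test signal; the window length chosen here is an assumption that sets the trade-off between time resolution and frequency resolution:

import numpy as np
from scipy.signal import spectrogram

fs = 500.0                                    # assumed sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)
# Non-stationary test signal: 8 Hz for the first two seconds, 20 Hz afterwards
x = np.where(t < 2.0, np.sin(2 * np.pi * 8 * t), np.sin(2 * np.pi * 20 * t))

# Short-term Fourier Transform: roughly one-second windows with 50% overlap
f, seg_times, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
# Sxx[i, j] is the power at frequency f[i] within the window centered near seg_times[j],
# so the 8 Hz and 20 Hz components are localized in time as well as frequency.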
Lissajous Graphs
Lissajous curves are the family of curves described by the equations:
X(t) = a sin(ωt + c)
Y(t) = b sin(t)
They are sometimes known as Bowditch curves after Nathaniel Bowditch, who studied them in 1815. The characteristics of these curves were also investigated by Jules Antoine Lissajous circa 1857. More broadly, Lissajous-type curves can be created by graphing one waveform against another. The visual presentation, when the ratio of the waveform frequencies is a rational number, has several important applications in physiological measurement. The Pressure-Volume (PV) loop is a significant example, as it has uses in evaluating cardiac and respiratory function. In both of these applications, pressure and volume are simultaneously measured and graphed. The pressure range constitutes the x-axis and the volume range is defined on the y-axis. The resulting PV loop picture describes a great deal about the relationship between these coupled variables.
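A minimal sketch of a Lissajous-type curve, using assumed amplitudes, frequency ratio and phase, is:

import numpy as np

t = np.linspace(0, 4 * np.pi, 4000)
a, b = 1.0, 1.0                 # amplitudes (assumed)
w, c = 1.5, np.pi / 4           # frequency ratio and phase (assumed)

x = a * np.sin(w * t + c)
y = b * np.sin(t)
# Plotting x against y (for example, with matplotlib's plot(x, y)) traces a closed
# Lissajous figure, because the frequency ratio 3:2 is rational.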
Another useful application area of Lissajous-type curves relates to the presentation of a phase space. A phase space is one where all the states of a system can be represented. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré and Willard Gibbs. A phase space graph incorporates every aspect of a system’s behavior.
To create a phase space graph for a single time-domain based waveform, the data is graphed versus a delayed version of itself. This type of graph is called a phase plane. It can be very useful to examine periodic signals sourced from physiological systems like the heart or lungs. The heart has been modeled as a chaotic oscillating system and phase plane plots of the ECG can be used to identify the range and depth of such chaotic behavior.
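A phase plane for a single recorded waveform can be sketched by plotting the signal against a delayed copy of itself; the delay, in samples, is an analyst-chosen assumption:

import numpy as np

def phase_plane(x, delay):
    # Return coordinate pairs (x[n], x[n - delay]) for a delay-embedded phase plane.
    x = np.asarray(x)
    return x[delay:], x[:-delay]

# Example: a noisy periodic signal traces a closed, slightly blurred loop in the phase plane
t = np.arange(0, 10, 0.01)
periodic = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(len(t))
x_now, x_delayed = phase_plane(periodic, delay=25)
# Plotting x_now versus x_delayed gives the phase plane portrait; irregular, non-repeating
# trajectories hint at the kind of chaotic behavior described above.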
Histogram
A histogram is a graph of the number of times a variable is measured to fall within each of a specific series of ranges. In particular, a histogram illustrates the distribution of data. If the data has a normal (Gaussian) distribution, then the histogram of that data has a symmetrical bell-shaped curve. The highest point on the curve corresponds to the data values most likely to occur. The tail-ends of the histogram represent the data values least likely to occur.
Histograms are useful for examining the statistical nature of the signal being measured. The shape of the histogram immediately identifies normal or abnormal distributions and can point to desired or interfering components.
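As a brief sketch, a histogram can be computed directly with NumPy; the bin count used here is an arbitrary assumption:

import numpy as np

signal = np.random.normal(loc=0.0, scale=1.0, size=5000)   # placeholder Gaussian-distributed data
counts, bin_edges = np.histogram(signal, bins=50)
# counts[i] is the number of samples falling between bin_edges[i] and bin_edges[i + 1];
# a symmetric bell-shaped profile of counts suggests a normal distribution.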
Filtering
Filtering is a core signal processing function. Filtering is the act of discriminating between one type of data and another. In the case of physiological signal processing, filters are employed to attenuate undesired signal frequencies and emphasize others. There are a few basic filter types and many methods available to implement those types. Commonly used filters are: Lowpass, Highpass and Bandpass. In the case of a Lowpass filter (LPF), signal frequencies lower than the filter cutoff frequency are emphasized relative to those higher than the cutoff frequency. For a Highpass filter (HPF), the signal frequencies higher than the filter cutoff frequency are emphasized relative to those lower than the cutoff frequency. A Bandpass filter (BPF) has two cutoff frequencies, a low and a high cutoff. In this case, signal frequencies between the filter cutoff frequencies are emphasized relative to those lower than the low cutoff frequency and higher than the high cutoff frequency. All types of filters can be crafted as a combination of Lowpass, Highpass and Bandpass filters. For example, a comb filter can be constructed from a cascaded series of Bandpass filters.
Filters can be implemented as analog or digital. Analog filters are physically realized using electrical components, such as resistors, capacitors, inductors, delay lines and operational amplifiers. Digital filters are implemented as an algorithm, which may reside in a computer, embedded microprocessor or digital signal processor. Digital filters require the signal data to be digitized, prior to processing. An analog to digital converter is used to convert time-varying signal data into a sequential string of proportional numbers. Digital filters process this input string of numbers to generate an output string of numbers. This output string of numbers can then be converted back into a time-varying signal through the use of a digital to analog converter.
Analog filters can be reasonably well-emulated by the combination A/D converter > Digital Filter > D/A converter. In the case of physiological measurement, it’s most common to employ analog to digital converters to direct the signal data straight to the computer. Once in computer memory, digital filters can be applied to the data to extract the signals of interest.
There are two popular algorithms used to implement digital filters, Finite Impulse Response (FIR) and Infinite Impulse Response (IIR). FIR filters do not use feedback and IIR filters do. Feedback in an IIR filter means that some portion of the filter’s output is introduced back to the filter’s input. IIR filters incorporate delay elements, multipliers and a summing junction. Assuming a perfect IIR filter, the recursive process implies that any input signal to the filter will have a residual influence on the output of the filter for an arbitrarily long period of time. This is why feedback or recursive filters are called infinite impulse response filters. An impulse applied to an IIR filter will, in principle, produce a response that never fully decays. In practice, however, the impulse response of an IIR filter soon drops below the range of values that the filter’s arithmetic can recognize or produce, so the effective impulse response is finite in length.
An FIR filter does not incorporate feedback or recursion into its design. This type of filter pushes the signal data through the filter in just one direction. In order to make an effective FIR filter, many filter stages are typically needed. An FIR filter stage consists of a delay portion combined with a multiplier portion. A 101 stage FIR filter will have 101 sequential delay stages and 101 multiplier stages. The multiplier stages sample the data present after each delay stage. The multiplier stage outputs are all summed together to create the FIR filter output.
Because IIR filters are recursive and FIR filters are non-recursive, they differ in important ways. If not properly designed, IIR filters can be unstable. In contrast, no matter how an FIR filter is designed, it is inherently stable. IIR filters do not have perfectly linear phase, although they can be designed to have reasonably linear phase in regions of interest. FIR filters can have perfectly linear phase. Linear phase means that the filter will delay all frequencies by the same time period. This time delay is known as group delay. Linear phase filters can faithfully mimic the essential shape of complex input waveforms, whereas non-linear phase filters may introduce certain distortions to the shape of an input waveform. Because of similarities in topology, certain types of IIR filters are well-suited to emulating certain analog filters. Because analog filter design methods are very advanced and well-documented, this topological similarity allows analog design methods to be transposed to the digital domain. For example, a common analog filter and a digital IIR filter can both implement a transfer function defined by a ratio of biquadratic functions. These filters are known as biquads or second order filters. When designing a biquad IIR filter, the biquad analog transfer function is converted to digital form via a substitution process called the bilinear transformation.
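As a sketch of both approaches, with cutoff frequency and sampling rate chosen only for illustration, SciPy can design a Butterworth IIR filter expressed as second-order (biquad) sections, which it obtains from the analog prototype via the bilinear transformation, alongside a 101-tap linear-phase FIR filter:

import numpy as np
from scipy import signal

fs = 1000.0          # assumed sampling rate, Hz
cutoff = 40.0        # assumed lowpass cutoff, Hz

# IIR: Butterworth lowpass as cascaded second-order (biquad) sections
sos = signal.butter(2, cutoff, btype='low', fs=fs, output='sos')

# FIR: 101-tap windowed-sinc lowpass; the symmetric taps give perfectly linear phase
# with a group delay of (101 - 1) / 2 = 50 samples
taps = signal.firwin(101, cutoff, fs=fs)

x = np.random.randn(2000)                 # placeholder input data
y_iir = signal.sosfilt(sos, x)
y_fir = signal.lfilter(taps, 1.0, x)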
Template Matching
Template matching is concerned with the identification of patterns in the data. In a simple case, the template looks just like the data you wish to find. The template is swept along the data sequence until a similar looking pattern is encountered. At the point of closest matchup, the output of the template matching function produces a coincident high or low value indicating the similarity of the match. The mathematical functions of correlation, convolution and mean square error can be used as the basis for template matching algorithms.
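A minimal sketch of template matching built on normalized correlation (the function name here is illustrative, not from any particular library) is:

import numpy as np

def match_template(data, template):
    # Slide the template along the data and return a similarity score at every offset.
    data = np.asarray(data, dtype=float)
    template = np.asarray(template, dtype=float)
    template = (template - template.mean()) / template.std()
    n = len(template)
    scores = np.empty(len(data) - n + 1)
    for i in range(len(scores)):
        window = data[i:i + n]
        window = (window - window.mean()) / (window.std() + 1e-12)
        scores[i] = np.dot(window, template) / n    # near 1.0 at a close match
    return scores

# The offset of the largest score marks the closest matchup:
# best_offset = np.argmax(match_template(data, template))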
Principal and Independent Component Analysis
Signal processing methods such as Principal Component Analysis (PCA) and Independent Component Analysis (ICA) can be employed on a variety of physiologically-sourced measurements to isolate signals of interest. PCA and ICA are useful signal processing methods for estimating the nature of individual component waves that have been superimposed, such as in the context of biopotential measurements. Because the body is a volume conductor, internal biopotential signal sources can sum together to create a layered and complex signal at points of measurement. Specific strategies can be used to distill these layered signals into constituent sources. These strategies are known as blind source separation (BSS) methods. BSS implies that the axes of projection (constituent sources) are determined through the application of some internal measure and without the use of any a priori knowledge of data structures. These techniques, and associated evolutions, can be employed on one or more channels of physiological data to isolate data components. Both ICA and PCA are feature extraction techniques used for dimensionality reduction. ICA employs non-Gaussian aspects of the wave data, using the third and higher moments, to generate the component waves. PCA employs Gaussian aspects of the wave data, using the first and second moments, to generate component waves. In a multidimensional space, ICA identifies the component waves that maximize their statistical independence and PCA identifies the component waves that maximize their variance.
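A hedged sketch using scikit-learn (assumed to be available) applies PCA and ICA to a simulated two-source, four-channel mixture; the sources and mixing matrix are fabricated purely for illustration:

import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 4000)
# Two simulated sources summed, as in a volume conductor, into four measurement channels
sources = np.c_[np.sin(2 * np.pi * 1.2 * t), np.sign(np.sin(2 * np.pi * 3 * t))]
mixing = rng.normal(size=(2, 4))
recordings = sources @ mixing + 0.05 * rng.normal(size=(len(t), 4))

pca_waves = PCA(n_components=2).fit_transform(recordings)                       # maximize variance
ica_waves = FastICA(n_components=2, random_state=0).fit_transform(recordings)   # maximize independence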
Heuristics
Heuristics is the trial-and-error use of signal processing methods, based on inferences gained when examining data. Human pattern recognition ability is exceptionally good when sensory-compatible data is perceived. When complex data sets are visualized, people can spot patterns not easily isolated by computer algorithms. When observing a new data set, the optimal strategy for discerning patterns usually involves a skilled researcher applying an array of software tools to the data. The researcher will utilize tools to emphasize one or more aspects of the data, as the researcher sees fit. This process involves a repeating sequence of speculation, searching and evaluation.
Signal Specific Processing
When analyzing physiological data, to locate and isolate phenomena of interest, a wide range of mathematical tools are useful. In particular, tools to identify aspects of periodicity are important because physiological systems exhibit cyclic behavior. Cornerstone examples include the heartbeat and respiration. Examination of the periodic nature of the electrocardiogram (ECG) is required for determination of a wide range of diagnoses, collectively known as “arrhythmia” meaning lack of rhythm or unrhythmical, from the Greek “arrhythmos”. Tachycardia is a fast rhythm; bradycardia is a slow rhythm. Premature atrial or ventricular beats are extra contractions in an otherwise normal rhythm.
Even the rhythmic variation of the heart rate itself, known as heart rate variability (HRV), has proven to be of considerable research interest. HRV methods examine the cyclic nature of the changes in heart rate over time. HRV analysis compares the power of the cyclic heart rate energy in one frequency band to that in another.
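As a minimal sketch of this idea, assuming R-wave peak times (in seconds) have already been detected, the interbeat intervals can be resampled evenly and their power compared across the conventional low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.40 Hz) HRV bands:

import numpy as np
from scipy.signal import welch

# Simulated R-peak times; in practice these come from an ECG peak detector
r_peak_times = np.cumsum(0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300)))
ibi = np.diff(r_peak_times)                    # interbeat (R-R) intervals, seconds
bpm = 60.0 / ibi                               # instantaneous heart rate, beats per minute

# Resample the interbeat intervals evenly (4 Hz) so a spectrum can be computed
fs_resample = 4.0
t_even = np.arange(r_peak_times[1], r_peak_times[-1], 1.0 / fs_resample)
ibi_even = np.interp(t_even, r_peak_times[1:], ibi)

f, pxx = welch(ibi_even - ibi_even.mean(), fs=fs_resample, nperseg=256)
df = f[1] - f[0]
lf_power = pxx[(f >= 0.04) & (f < 0.15)].sum() * df   # low-frequency band power
hf_power = pxx[(f >= 0.15) & (f < 0.40)].sum() * df   # high-frequency band power
lf_hf_ratio = lf_power / hf_power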
The neuronal activity of the brain is characterized by classifying the rhythmic behavior of the electroencephalogram (EEG) into a range of frequency bands. These bands are:
Delta 0.5-4Hz
Theta 4-8Hz
Alpha 8-13Hz
Beta 13-30Hz
Gamma 36-44Hz
The relative energy levels of these EEG frequency bands are indicative of mental state. For example, it has been shown that EEG alpha activity reflects attentional demands and beta activity reflects both emotional and cognitive processes. [1]
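A hedged sketch of relative band power, using Welch’s method on a placeholder EEG segment with an assumed 256 Hz sampling rate and the band edges listed above, is:

import numpy as np
from scipy.signal import welch

fs = 256.0                                    # assumed EEG sampling rate, Hz
eeg = np.random.randn(int(fs * 60))           # placeholder for one minute of single-channel EEG

bands = {'Delta': (0.5, 4), 'Theta': (4, 8), 'Alpha': (8, 13),
         'Beta': (13, 30), 'Gamma': (36, 44)}

f, pxx = welch(eeg, fs=fs, nperseg=int(fs * 2))   # 2-second windows
df = f[1] - f[0]
band_power = {name: pxx[(f >= lo) & (f < hi)].sum() * df
              for name, (lo, hi) in bands.items()}
total = sum(band_power.values())
relative_power = {name: p / total for name, p in band_power.items()}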
In the case of the electromyogram (EMG), signal processing methods are used to establish the mean and median frequencies associated with the collective motor unit action potentials. These frequency characteristics are useful when making determinations as to muscle fatigue and force. [2]
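A minimal sketch of the mean and median frequency computation, on placeholder EMG data with an assumed 2000 Hz sampling rate, is:

import numpy as np
from scipy.signal import welch

fs = 2000.0                                   # assumed EMG sampling rate, Hz
emg = np.random.randn(int(fs * 5))            # placeholder for 5 seconds of EMG

f, pxx = welch(emg, fs=fs, nperseg=1024)
mean_freq = np.sum(f * pxx) / np.sum(pxx)     # spectral centroid of the power spectrum
cumulative = np.cumsum(pxx)
median_freq = f[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
# A downward drift of these frequencies across repeated contractions is a commonly used fatigue index.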
Even though cyclic signal processing methods are important when analyzing physiological data, a host of additional methodologies remain relevant.
Stimulus-Response:
Critical aspects of physiological measurement are related to the idea of stimulus and response. In these situations, choice of stimulus is important, as the nature of response can greatly vary. Physiological systems are usually characterized by a variety of non-linear behaviors, so small changes in stimulus can have huge impacts on the nature of response. In many stimulus-response tests, the time period between the onset of stimulus and the beginning of the response is an important factor.
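As a hedged sketch, assuming the stimulus onset sample indices are known, stimulus-locked epochs can be averaged and the latency to the largest deflection estimated:

import numpy as np

def stimulus_locked_average(signal, stim_indices, pre, post):
    # Average fixed-length epochs (pre samples before to post samples after each stimulus onset).
    epochs = [signal[i - pre:i + post] for i in stim_indices
              if i - pre >= 0 and i + post <= len(signal)]
    return np.mean(epochs, axis=0)

fs = 1000.0                                     # assumed sampling rate, Hz
recording = np.random.randn(60000)              # placeholder recording
stim_indices = np.arange(1000, 59000, 1000)     # assumed stimulus onsets, one per second

avg = stimulus_locked_average(recording, stim_indices, pre=100, post=500)
latency_s = np.argmax(np.abs(avg[100:])) / fs   # time from stimulus onset to largest deflection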
Differentiation-Integration:
These kinds of methods are useful when converting between physical variables, such as power and energy. The integration of power (P) over a specific time period (T) is Energy (E). Alternatively, the differentiation of energy over time indicates the power at any point in time.
E (joules) = P (watts) * T (seconds), when the power P is constant over the period T
Another example involves the conversion of flow to volume. The integration of flow (F) over a specific time period (T) is volume (V). Alternatively, the differentiation of volume over time indicates the flow at any point in time.
V (liters) = F (liters/second) * T (seconds), when the flow F is constant over the period T
Differentiation is a very useful signal processing method in that its output emphasizes the rate of change of the variable being differentiated.
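A short sketch of these conversions, on a placeholder airflow signal with an assumed 100 Hz sampling rate, is:

import numpy as np

fs = 100.0                                      # assumed sampling rate, Hz
t = np.arange(0, 10, 1.0 / fs)
flow = 0.5 * np.sin(2 * np.pi * 0.25 * t)       # placeholder airflow, liters/second

# Integration: running volume is the cumulative sum of flow times the sample interval
volume = np.cumsum(flow) / fs                   # liters

# Differentiation: recover flow as the rate of change of volume
flow_recovered = np.gradient(volume, 1.0 / fs)  # liters/second, emphasizes rate of change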
Following is a tabulated listing of various signal processing methodologies:
ECG: Interbeat (R-R) Interval, Identification of P, Q, R, S, T timing, recognition of aberrant behavior – such as tachycardia, arrhythmia and premature ventricular contractions, high frequencies noted on trailing edge of R-wave, heart rate variability
EEG: Energy in frequency bands – Delta, Theta, Alpha, Beta, Gamma, evoked response measurements – AEP, VEP and SEP, P100, P300, P600, ABR, identification of alpha spindles, seizure recognition
EMG: Mean and median frequencies, motor action potential identification, evoked responses – H reflex, nerve conduction velocity, root mean square measurements, spike counting
EOG: position and velocity determinations, frequency analysis, micro-saccadic identification, nystagmus recognition
EGG: propagated signals, identification of bradygastria, normogastria and tachygastria
Non-invasive Cardiac Output: identification of Q, R, B, C, X intervals, stroke volume determination
Blood Flow: integration to volume, beat to beat measurement, cardiac output, vascular resistance
Blood Pressure: establishment of pressure-volume loops, calculation of diastolic, mean and systolic pressure, beat to beat intervals
Air Pressure and Volume: measurement of pressure-volume loops
Air Flow: Flow Integration to volume, Forced expiratory volumes – FEV1, FEV2, FEV3, tidal volumes
Nerve Conduction Velocity: measurement of the speed of nervous system signals
Auditory Brainstem Response: sound stimulus response and averaging test to measure hearing ability
Visual Evoked Response: optical stimulus response and averaging test to measure visual acuity
Cognitive Responses (P100, P300, P600): stimulus response and averaging test that indicates states of cognition