
Mathematician

Location:
Coburg, OR
Posted:
February 25, 2026



RESUME

Michael Brooks

I am interested in any position. Most employers view me as an engineer, but I am rusty with everything but C and C++. I attended the Sun Java school in Chicago and graduated, but have never used that skill. I have several dozen patents, most involving software, and most were written in assembly language, C or C++. Where I excel, which you actually need, is in writing and implementing algorithms. I am a mathematician who disliked teaching college. I earned a degree in electrical engineering in 1979 and worked as an EE for most of my career, except for two brief stints teaching mathematics and work as a data warehouse and database engineer. I am a certified Microsoft DBA and Developer. I still do beta testing for Microsoft. I am currently a range safety officer at the McGowen Creek arms range for police officers. I invented the OTDR and am “on call” by various agencies trying to figure out how optical fiber cables/networks stopped working, whether there are hacks/intrusions, and how to transmit quantum data over glass fiber networks. I am even an RF expert with a current radio license.

I have other patents than the ones listed. You can find those, along with fly tying and robotics videos of me, on the Internet. Note, I am 78 and not as physically active as I was 20 years ago. Mentally, I haven’t slowed down at all. I enjoy writing assembly language code and porting Linux onto various platforms as a hobby. I have the touch screen working on an Ubuntu port to the new Lenovo Ultra 7 based laptop. All that said, I am willing to work as a greeter or telephone technical adviser. I have an adult child with bipolar disorder that I provide for. He cannot hold a job down because he hallucinates invisible people doing funny things that cause him to laugh out loud at awkward times. He isn’t violent. He is, in fact, very sweet and quite intelligent. He is incapable of surviving without help from his family and, right now, that is me. I tore the meniscus in my left knee, and that made teaching fly casting or robotics to very active middle school children impossible until I get surgery on that knee, which I cannot afford without a job. So, I will do pretty much any job that temporarily considers my disability and allows me to provide for and care for my child.

Spectra Physics

* AI/Expert Systems, written in Smalltalk

* Invented the scanner scale

* Invented the DAT chip

* Invented the Freedom scanner. Interestingly, I wrote the software for that in assembly language for the first generation ARM processors (StrongARM).

* Patents: US5311000, US5440110, US4879456, US4963719, US5198649

Intermec/Honeywell/United Barcode Industries (Paris, France; Gothenburg, Sweden)

* Wrote software for Intermec printers

* Designed and wrote the Linux/Unix based version control system used by Honeywell & UBI (I can provide the code and manual I wrote if you like)

* Designed a method for updating software for our printers. Prior to 1994, technicians went on site and replaced the ROMs in printers to update/upgrade/bug-fix equipment. I developed the method and software for doing this. I expanded upon this and it ended up as a global patent for updates over the internet.

* Patents: US6618162 (APPARATUS AND METHOD TO CONFIGURE A DEVICE, SUCH AS A PRINTER, OVER A NETWORK). After the popularization of the internet, that was enlarged upon to cover all updates over the internet (US6618162A1, US6618162B1, etc.). That led to the global patent series WO2000043863A1–WO2000043863A9, etc. I literally invented much of what you call “the internet”.

Tektronix

* Invented the OTDR (Optical Time Domain Reflectometer). This was, and I believe still is, classified because it shows how to intrude into fiber and wire networks. Optical fiber networks are used to transmit quantum bits, so this invention will become even more important. I am attaching the white paper and code because I foresaw the use of the algorithms (note the new algorithm for calculating slopes) for stealth and quantum technologies. I also participated in the development of active stealth technologies, but I have no idea if I am allowed to talk about that or not.

Oregon

* Data Warehouse Administrator. Terminated for whistleblowing. The state was selling HIV/AIDS and other test results to Oregon contract insurers (Oregon Health Plan). They, in turn, were selling those results, along with other laboratory results and prescription drug data, to other insurers, doctors, private investigators, and data warehouses, for Oregon and 32 other states. Data brokers were (and likely still are) using that information to determine creditworthiness and interest rates on loans. My whistleblowing was largely responsible for Medicare reform and for including the FTC in medical records breach notifications (ongoing).

Self Employed

* I have been working on a nonintrusive blood chemistry test using three tuned laser diodes. This is based on the idea behind the oximeter. The problem with using it to, say, measure blood glucose is that glucose shows up in three distinct absorption frequencies, while oxygen appears at only a single one. Blood oxygen can be measured with a single fixed frequency infrared laser and a red diode. By using three lasers, I calculated that I can measure CO2, calculate kidney and liver function, measure vitamin D, uric acid, etc. The idea was to be able to conduct an accurate blood test in a doctor’s office (in a jungle, third world city, war zone, in a spaceship, or on the surface of Mars) with immediate results. I dropped this because I ran out of money and did not bother to look for venture capital because of my experience as a DBA for Oregon and corrupt insurance conglomerates and their investors.

Eugene Parks and Recreation

* I taught three different robotics classes for adults, children and TAG high school students.

* I also taught fly tying classes and fly casting classes. I led fly fishing outings. I was forced to drop this because of a knee injury (lateral tear of the left knee meniscus) that has remained untreated because I am on Medicare and the Oregon Medicare Advantage insurers were saving money by not treating injuries for whistleblowers and other defenseless patients.

* Tae Kwon Do instructor. I have black belts in Tae Kwon Do and Hapkido.

I am very interested in finding part time, remote work. I am the primary caregiver for an adult disabled son with bipolar disorder with schizophrenic tendencies (he sees things, mostly people and devils, and laughs hysterically). He is being treated with lithium and Geodon. My only retirement is $80 a month from Honeywell and Social Security. That work at Eugene Parks and Recreation was used to feed and care for my son and myself. I need to work and am willing to start right at the bottom. Be aware that I make every company I work for more profitable. Given time to learn what you are doing, I WILL see ways of making things easier, better and more profitable. Starting part time at an entry level position allows me to learn about your systems at low cost to you and allows me to prove myself and my value as an employee, while taking care of my family obligations. You won’t regret taking a chance on me.

Michael Brooks

32713 Vintage Way

Coburg, Oregon 97408

Spice OTDR Algorithm

INTRODUCTION

This document outlines the algorithms used on the Spice OTDR. These algorithms were developed by Mike Brooks and Bill Trent and should be considered “confidential” as they are under consideration by Tektronix for patent. These, separately and/or as a group include: the linear-to-dB transformation, the “sliding slope/y-intercept/etc.” methods, the event sieves A1, A2 and A3, and the event refine sieves, the concept of using sieves to identify events and other anomalies, the noise characterization schemes, and the inner-quartile coefficients and various support functions.

The data used for testing the described methods is from a variety of sources, including synthesized waveforms and actual data from Raven and the 3031. The events on these waveforms have included discrete non-reflective events, “gainers” and reflective events, and grouped events composed of variations of all three of these basic types. All of the events tested covered a wide variety of shapes and losses in an attempt to break the algorithms, especially the sieves. The results of these tests demonstrate that these algorithms work far better than those used on the 3030/31 (the original algorithms suggested for porting to Spice). For example, the Spice algorithm suite finds all of the events (with no false events) on the fantasy fiber and marketing fiber waveforms, whereas the 3031 suite finds only about 60% of those events.

Our approach to finding events differs radically from previous methods of identifying events. Prior event marking schemes separately identified, “proved”, marked, and measured unique events on the waveform. Such an approach, while usually successful, is time consuming and difficult to tune to unique hardware. Our approach uses statistics and “fuzzy logic” to isolate areas where events are likely to occur; as such, it is nothing more than a set of complex filters. Under this approach, rather than speak of an anomaly on the fiber as being positively an event, one should say that the anomaly is an event with some degree of certainty. In our case, we will only mark anomalies as events if we are within a 99% certainty. In the future, this criterion could be relaxed or tightened, or we could have two or more classifications for events: 99% certainty, 90% certainty, 85% certainty, etc. Sieves A1 and A2 alone always detect all real events and mark less than 1% false events. Sieve A3 reduces this by a factor of 10, both reducing the number of false events and focusing in on real events. The two refine functions, NRrefine and RRrefine, are just additional sieves that could just as well have been named sieves A4 and A5. NRrefine and RRrefine further filter the data and isolate events with a degree of probability bordering on certainty.

A last sieve (that the marked events will need to go through) is a detector for ghosts and echoes. It is expected that existing 3030/3031 code to detect these will be ported to Spice unchanged.

1. CONVERTING LINEAR DATA TO BELLCORE FORMAT

Function: lin2dB

One of our goals with Spice is to work with and store waveform files in the “Bellcore format” to simplify loss calculations and display layout. The Bellcore format specifies that 1000 counts is equal to one decibel (1 count = 0.001 dB). We determined that, at the outset, we wished to convert all of our linear data values to this standard and work with these values directly. The formula for converting linear data to decibel data is simply:

5 * log10( (N – Z) / (F – Z) )

Where: N is the linear data value, Z is the zero offset (or baseline) for the data set, and F is the full scale (or maximum) value for the data set.

Scaling the result of this calculation by 1000 and removing any fractional component left over yields a result in accordance with the Bellcore format.

In theory, given the dynamic range of a system and knowledge of its hardware, we can either calculate the zero offset and full scale values for the system or can pass these values in with the linear data set to be transformed. This works fine for debugging and testing, but in actual practice a number of variables affect the zero offset (temperature, number of averages, DC offset, etc.) and it is much better to have the hardware subsystem calculate and pass this value in to the function handling the data transformation.

We note that this method of transforming data from linear values to Bellcore format has a problem in handling data as it approaches or falls below the zero offset. As the linear values approach the zero offset value, the transformed data values get smaller and smaller until, with N – Z = 0, the transformed value is an infinitely large negative number. For values less than the zero offset, the transformation would yield an imaginary number. Thus, for our purposes, we have set an arbitrary limit established by the full scale value; that value being exactly 5 * log10( 1 / (F – Z) ) (the same as N – Z = 1) for all values less than or equal to the zero offset.

Also, note that as N approaches the zero offset, very small changes in the linear value will cause drastic changes in the transformed data. These wild fluctuations are not a part of the real waveform data. Rather, they are an artifact introduced in our transformation of the data from the linear to the log domain. Nonetheless, the temptation to use a small zero offset value to lower the point at which the transformed data fluctuates is to be avoided. An incorrect zero offset will cause the data to be non-linear as it approaches the noise floor. The degree of non-linearity and the severity of that effect is directly dependent on the accuracy of the zero offset value (and, to a lesser extent, the full scale value). Accurate transformations depend upon an accurate calculation of the zero offset value.
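As a concrete sketch of the transformation just described, assuming integer truncation for “removing any fractional component” and the clamp at N – Z = 1 for values at or below the zero offset (the function name is illustrative, not the original lin2dB):

```python
import math

def lin_to_bellcore(n, zero, full):
    """Convert one linear sample to Bellcore counts (1 count = 0.001 dB).

    Implements 5 * log10((N - Z) / (F - Z)), scaled by 1000, with values
    at or below the zero offset clamped to the floor 5 * log10(1 / (F - Z)).
    """
    delta = n - zero
    if delta < 1:          # at or below the zero offset: clamp to N - Z = 1
        delta = 1
    db = 5.0 * math.log10(delta / (full - zero))
    return int(db * 1000)  # truncate the fractional part of the count
```

With Z = 1 and F = 1001, a full-scale sample converts to 0 counts, and a sample one tenth of full scale converts to -5000 counts (5 dB down).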

To avoid the need for floating point arithmetic and to speed up calculations we reformulated the problem using the fact that log(M/N) = logM – logN and by scaling the log values by 1000. This is accomplished in the function lin2dB . The actual conversion to log value is handled by a scaled lookup table called by the function lin2log from within lin2dB . lin2dB also calculates the standard deviation of the noise from the laser off information. A description of lin2dB is as follows:

Given any linear value to be converted to its log equivalent, that value is compared against a scaled (by 1000) table of log10 values. If the number is larger than the table size (the number of entries in the table), then we shift the linear value one bit position to the right and re-compare that number against the log table size. This process continues until the number is less than the log table size, the number of shifts being remembered. Then, the log10 value of the resultant linear value is looked up directly in the log table. The number of shifts is looked up in a table of log10 values of the powers of two, 2^N, where N is the number of shifts. The two resultant numbers are added together. This converts any number the Spice hardware can produce (roughly 10.0E15, but the table could easily be extended to handle much larger numbers).
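The shift-and-lookup scheme can be sketched as below. The table size, the 64-entry shift table, and the use of rounding to build the tables are assumptions for illustration; the original firmware built its tables for the Spice hardware's integer range:

```python
import math

TABLE_SIZE = 4096  # assumed table size; index 0 is a sentinel for "no signal"
LOG_TABLE = [0] + [round(1000 * math.log10(v)) for v in range(1, TABLE_SIZE)]
SHIFT_TABLE = [round(1000 * math.log10(2 ** s)) for s in range(64)]

def lin2log(value):
    """Scaled log10 (1000 * log10(value)) by shifting and table lookup.

    Shift the value right until it fits the log table, remembering the
    number of shifts, then add the table entry for the shifted value to
    the scaled log10 of the corresponding power of two.
    """
    shifts = 0
    while value >= TABLE_SIZE:
        value >>= 1        # halve; log10(2^shifts) is re-added below
        shifts += 1
    return LOG_TABLE[value] + SHIFT_TABLE[shifts]
```

Shifting discards low-order bits, so results for large inputs are approximate to within a count or two; the integer tables avoid floating point at run time entirely.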

The full scale minus the zero offset value is calculated first and can be treated as a constant, K, for the entire linear data set for the segment in question. The conversion of any linear number then decomposes to:

5 * ( K – lookup(linData[i] – zero_offset) )

This is four arithmetic operations per value to be converted (not counting the very fast logical shift operations). This could be made even faster by factoring the multiplication into our lookup table. The resultant returned values adhere to the Bellcore format.

A real advantage to our method is that, by having the resident system pass in a full scale and zero offset, this method is decoupled from any hardware dependencies.

2. FINDING EVENTS

2.1 Introduction

Events from our perspective are simply anomalies in our data set. Essentially, and somewhat simplistically, what we are asking an OTDR to do is to automatically find those anomalies on a fiber that our eyes can see. A rough, but reliable method for checking the accuracy of our event finding algorithms is to plot the linear data using a tool like Excel and see if the anomalies identified match what our eyes can see, sometimes with magnification. Often, the algorithms will identify anomalies that we can only see by greatly expanding the graphical representation of the data.

We have determined to decouple the event finding algorithms from the hardware as much as possible. To this end, the only required input from the host system is the number of samples in an approximate actual or apparent pulse width (Np), which is used to correlate the data. An apparent pulse width is an actual pulse width times all components that “stretch” the size of the pulse width, usually an effect of the bandwidth of the hardware. A precise Np value is not necessary, although the more accurate the Np value passed, the more accurately events can be determined (this translates both to finding smaller events and to better determining the location of an event). An Np value within +/-50% of the apparent value will work for the most part. The data is assumed to be in an array of values in Bellcore format (Wfm[]).

To the same end of removing hardware dependencies from event marking, we have elected to implement a series of sieves to separate “real” events from “false” events. Any one sieve is not designed to separate all real from false events. The sieves are logically independent of one another; however, each sieve only acts on data identified as a possible event by a previous sieve. By this scheme, a new sieve or sieves may be added, or an old one modified, without affecting the overall method of finding events.

2.2 Setup For The Sieves

Function: makeSlopeArray, preprocessSegment

Our series of sieves serves to identify all of the anomalies in a data set. To start, we derive from the data a set of mean y-values (gain levels in Bellcore counts over a pulse width of data), line slopes (m_i) and “anti-slopes”, y-intercepts (I_i) and “anti-y-intercepts”, placing these in temporary arrays. These values are calculated using a sliding window one pulse width wide, as follows:

Set up the initial values for the data set:

For a window of Np points with relative x-values 0 … n (where n = Np – 1), starting at sample i:

  Y[i] = Wfm[i] + Wfm[i+1] + … + Wfm[i+n]            (sum of the y-values)
  W[i] = 0·Wfm[i] + 1·Wfm[i+1] + … + n·Wfm[i+n]      (sum of the x·y products)
  KK   = 12 / ( n · (n + 1) · (n + 2) )
  m[i] = KK · ( W[i] – (n/2) · Y[i] )                 (least-squares slope)
  I[i] = Y[i]/Np – m[i] · (n/2)                       (y-intercept)

Then, for each data point in the Bellcore array, calculate the sliding values:

  Y[i+1] = Y[i] – Wfm[i] + Wfm[i+n+1]
  W[i+1] = W[i] + Np · Wfm[i+n+1] – Y[i+1]
  m[i+1] = KK · ( W[i+1] – (n/2) · Y[i+1] )

The “anti” slopes and y-intercepts are calculated by executing the same algorithms for calculating the slopes and y-intercepts, except that the data set is for Wfm points less than i (i.e. Wfm[i–1], Wfm[i–2], …). We note that this is the same as the calculated y-intercept and slope for Wfm[i – Np]! All of this is accomplished in the function makeSlopeArray. Note that this is a new and unique method that fits a line over a set of data points (our sliding window) without the burden of calculating the standard deviation and y-intercept using the least squares method. This is unique and much faster!
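A minimal sketch of the sliding-window slope calculation, assuming relative x-values 0..n inside each Np-point window (the names and list-based interface are illustrative, not the original makeSlopeArray):

```python
def make_slope_array(wfm, np_):
    """Least-squares slope of each Np-point sliding window over wfm.

    Running sums Y (sum of y) and W (sum of j*y, with j = 0..n relative
    to the window) let each new slope be computed in constant time.
    """
    n = np_ - 1
    kk = 12.0 / (n * (n + 1) * (n + 2))
    y_sum = sum(wfm[:np_])                       # Y for the first window
    w_sum = sum(j * wfm[j] for j in range(np_))  # W for the first window
    slopes = [kk * (w_sum - 0.5 * n * y_sum)]
    for i in range(len(wfm) - np_):
        y_sum += wfm[i + np_] - wfm[i]           # slide Y by one point
        w_sum += np_ * wfm[i + np_] - y_sum      # slide W by one point
        slopes.append(kk * (w_sum - 0.5 * n * y_sum))
    return slopes
```

On a straight line the slope array is constant: a waveform rising two counts per sample yields a slope of 2.0 for every window.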

Next, we calculate the starting slope from the end of the (nominal) 512 “laser off” sample points on the Bellcore/log data array in the function preprocessSegment. A sequential series of points is stored in an array. Those points are sorted, least to greatest value, with the median value in the center. The lower and upper quartiles of this data set are discarded to compensate for “outliers” and real events that might be in the data set. Then, the mean of the inner quartiles is calculated. This is our initial “seed” fiber slope. PreprocessSegment also calculates the launch power for the fiber test from this information and stores it for use in calculating reflectance.

Then, the whole fiber is analyzed for slope changes at points 512 + n·Np. That is, beginning after the last “laser off” sample point, and for every Np points thereafter, we analyze the fiber for a slope change. This analysis consists of summing Np slopes forward and (separately) Np slopes backward from the point under consideration. If the averages of the forward and backward slopes are within +/-0.5 of each other, and if this pattern is repeated 10 times, then and only then do we consider that we have a slope change. Finally, working backwards and using the slope and anti-slope values like butterfly wings, we search for the exact location of the slope change. That point is an event where two fibers having different loss characteristics are joined. But it may be that the event joining the two fibers is too small to detect, and the area where the slopes change will indicate such an event! We will mark such areas as events, regardless, and attempt to pin down an end-of-event location and event loss using this information. (Note that this only identifies events via indirection. Events, also, can occur on fiber where no slope change takes place before or after the event.)
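The inner-quartile seed-slope step can be sketched as follows (a simplified illustration; the exact quartile boundaries used by preprocessSegment are an assumption):

```python
def inner_quartile_mean(values):
    """Mean of the middle half of a data set.

    Sort, discard the lower and upper quartiles, and average the rest,
    so that outliers and real events hiding in the sample do not skew
    the seed slope.
    """
    ordered = sorted(values)
    quarter = len(ordered) // 4
    middle = ordered[quarter:len(ordered) - quarter]
    return sum(middle) / len(middle)
```

A single extreme sample, such as a reflective event caught in the window, is discarded with its quartile and does not move the mean.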

It must be noted that the slope figures calculated are for point-to-point over a pulse width. A slope of (say) minus-2.0 means that the trend of a line is downwards (away from the launch power value), two points for each change of x value.

2.3 Slope Trend Finder

Function: findSlopeChanges

The sieves are dependent on the fiber slopes. The function findSlopeChanges finds all of the slope changes on the fiber and places the location where the change occurs, and the slope value, in an array. This is easily accomplished by checking for slope changes on the fiber over an area equal to some multiple of the sample density. For a sample density of (say) 10: if ten Np-width occurrences of a slope occur AND if those slopes are different from the previous group of slopes, then a slope change is said to occur and is logged in the slope change array. Note that we will only find slope changes that are NOT a part of an event – indeed, we only identify slope changes that occur AFTER an event.

This is valuable information. Through a process of indirection we have already identified a set of events on the fiber because, by definition, a change in fiber slope identifies a splice between two pieces of fiber having different refractive indices!
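A sketch of the slope-change logger. The run length of ten Np-wide groups follows the text; the +/-0.5 agreement tolerance and the skip-ahead after logging a change are assumptions:

```python
def find_slope_changes(slopes, np_, runs=10, tol=0.5):
    """Log (index, slope) wherever a slope persists for `runs` consecutive
    Np-wide groups and differs from the last slope logged."""
    changes = []
    current = None
    i = 0
    while i + runs * np_ <= len(slopes):
        group = [slopes[i + k * np_] for k in range(runs)]
        mean = sum(group) / runs
        stable = all(abs(g - mean) <= tol for g in group)
        if stable and (current is None or abs(mean - current) > tol):
            changes.append((i, mean))   # a new slope region begins here
            current = mean
            i += runs * np_             # skip past the region just logged
        else:
            i += np_
    return changes
```

On a fiber whose slope steps from -1.0 to -3.0 halfway along, the logger records one entry for each of the two regions.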

2.4 SieveA1

Function: SieveA1

Our sieves make the assumption that the width of any “real” event is at least one pulse width wide and that the number of samples in a pulse width is five or greater. The first part of this assumption is easily verified if you look at the effect of sliding a window along a data set while calculating the slopes (or, for that matter, almost any other value composed of all of the sample points in that window).

[Drawing 1.0: sliding a window across a non-reflective event yields the slope sequence 0 0 0 -a -b -c -d a 0]

As can be seen in our example, the effect of sliding a window along the data representing a non-reflective event is increasingly negative slopes. For the leading edge of a reflective event, as you might expect, the effect is increasingly positive slopes. In fact, we note that the number of slopes tending positive or negative is precisely equal to the number of sample points composing an anomaly or event. We rely upon this fact for our first sieve. Setting up an array corresponding to the data points on a segment, if we see a series of Np slopes, all positive or all negative, we mark that window as a possible event region. Note that the first in the series of the slope trend occurs before the start of the event (disregarding noise, exactly Np–1 points before the event). Also note that our first sieve serves to sort events into positive and negative slope trends. Our first sieve marks positive going trends with a “1” and negative trends with a “–1”. Later this information can be used to distinguish the “up-side” vs. the “back-side” of reflective events, non-reflective events from gainers or reflective events, etc. (Note that the leading edge of a reflective event going “straight up” will exhibit a set of Np-1 positive slopes followed by a slope of zero. This is accommodated in our implementation.) As might be expected, random noise can mimic slope trends, causing a certain number of falsely marked events. We would expect to see false events in (roughly) 1 in 2^Np cases. For an Np of 5, that is 1/32 of the time. As Np increases, the number of falsely marked events will decrease. We can also expect that noise would cause a certain number of event regions to be “stretched”; e.g. the actual start of the event is beyond the point i+Np-1.

2.5 SieveA2

Function: SieveA2

Our second sieve relies, again, upon something evident from our illustration.
In an ideal setting, if we consider the absolute value of our slopes, the slope values in a series leading to “real” events will go from 0.0 and increase until some slope lies wholly on the event. This will be the largest slope value. Then, the slope values will decrease until, again, a slope of 0.0 marks the end of the event. In a non-ideal setting (i.e. with noise) this even distribution of slopes still holds true – the slopes calculated for an event region will show a normal distribution. Now, noise can be either synchronous or asynchronous. Synchronous noise is usually a hardware problem that is thought to have been eliminated from Spice. If present, a special sieve will have to be constructed to deal with it. Asynchronous noise, however, is by definition random. It is asynchronous noise that we refer to when we discuss noise in this document. Asynchronous noise may occur symmetrically or asymmetrically around a theoretical central axis. This kind of noise, in a given instance, might even mimic the characteristics that define a “real” event. In a given data set there is a probability (very small) that noise will mimic an event and will be marked as such. However, an event mimicked by asynchronous noise is not repeatable (synchronous noise might mimic an event, too, and may be repeatable – if severe it can make it impossible to design accurate OTDR software). Whereas a specific instance of noise might mimic a “real” event, we would expect that (in another test of that same fiber) the next instance would be missing the falsely marked event.

[Drawing 2.0: sign pattern of noisy slopes: + + - - + - + + - + + - + - - + - + + + + + - -]

Given a random set of Np all-positive or all-negative slopes falsely marked as an event, it would be a rare occurrence for those slopes to exhibit the even distribution we see with “real” events. In the illustration (above) you see a series of positive slopes, but the absolute values for those slopes are all over the place. This is asynchronous noise.

Our second sieve only looks at points previously marked as possible events. Then, it looks at that point and ahead Np-1 points for the distribution of the slopes. An even distribution of slopes is taken as marking an event region. An uneven or skewed distribution is taken as marking noise. These facts are marked in the previously defined array of possible events. If a region marked by a “1” or “-1” passes the second sieve test (i.e. it has a more-or-less even distribution of absolute slope values over Np points) then multiply the “1” or “-1” by “2”.

[Drawing 3.0: a window of slopes at points i+1 through i+5]

An alternate manner for looking at the same characteristic is to search for the largest absolute slope value in a group of slopes (i+3 in the above example) and check that (Np-1)/2 slopes to either side of the largest slope are less than it. In our example, for an Np=5 (above), we would check that:

abs(i+3) > abs(i+1) AND

abs(i+3) > abs(i+2) AND

abs(i+3) > abs(i+4) AND

abs(i+3) > abs(i+5)

This has the advantage of forcing the checking of a window Np points wide, but it is slower. We are continuing to check the advantages of these two methods against each other. Both appear to work equally well over the set of real and simulated waveforms we have fed them.
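The second, peak-centered form of the check can be sketched as below (boundary handling and the strict inequality are assumptions where the text is silent):

```python
def sieve_a2_peak(abs_slopes):
    """True if a window of absolute slopes rises to a single centered peak.

    The largest value must sit with (Np - 1) / 2 smaller values on each
    side: the shape expected of a real event rather than noise.
    """
    np_ = len(abs_slopes)
    peak = max(range(np_), key=lambda k: abs_slopes[k])
    half = (np_ - 1) // 2
    left = abs_slopes[max(0, peak - half):peak]
    right = abs_slopes[peak + 1:peak + 1 + half]
    centered = len(left) + len(right) == np_ - 1
    return centered and all(v < abs_slopes[peak] for v in left + right)
```

A window such as [1, 2, 5, 2, 1] passes; a window whose largest value sits at an edge, as noise tends to produce, fails.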


2.6 SieveA3

Function: SieveA3

The third sieve uses the results from the first two sieves and parses events into negative-going non-reflective events (NNR) and reflective or positive-going non-reflective events (PNR). After parsing, the actual start of the event is located by finding a set of Np/2 points that are all higher or lower than the suspected start point (this is also an event test). In the case of reflective and PNR events, the two types are separated further by applying numerical checks based on the shape of the two event types.

In the case of a reflective event, the point i+2 will be higher (greater) than the point Np+i+2. For a PNR, exactly the opposite will be the case: the point i+2 will always be lower (less) than the point Np+i+2. For “insurance”, in both instances the points i+3 and Np+i+3 are likewise tested. Next, a statistically derived threshold is applied to those points identified as events. When the signal to noise ratio is less than 3.0, the point is discarded as an event (where the signal to noise ratio is defined to be five times the log10 value of the linear value of the point just before the event, less the zero offset, divided by the standard deviation of the noise). Finally, for NNR and PNR events a more-or-less arbitrary event threshold is applied to the event. This threshold determines the minimum size under which a loss or gain will be considered an event. For our purposes, this threshold is set at 0.03 dB: an anomaly, in order to be called an event, must have a gain (or, in the case of NNR events, a loss) greater than 0.03 dB.

The third sieve works quite well for all cases except where Np < 5. As a consequence the minimum Np for our system is 5.
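The two final cuts applied by sieveA3 can be sketched as follows (the 3.0 SNR floor, the five-times-log10 definition, and the 0.03 dB minimum follow the text; the function name and parameter layout are assumptions):

```python
import math

def passes_sieve_a3_thresholds(lin_before, zero_offset, noise_sigma, event_db,
                               min_snr=3.0, min_event_db=0.03):
    """Final sieveA3 cuts: a signal-to-noise test and a minimum event size.

    SNR is five times the log10 of (linear value just before the event
    minus the zero offset), divided by the standard deviation of the noise.
    """
    snr = 5.0 * math.log10(lin_before - zero_offset) / noise_sigma
    return snr >= min_snr and abs(event_db) >= min_event_db
```

A 0.05 dB event riding well above the noise floor passes; the same event too close to the noise floor, or an anomaly under 0.03 dB, is discarded.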

Finally, the identified events are checked by the functions NRrefine and RRrefine. These functions, respectively, finally check and measure non-reflective and reflective events (NRrefine handles both NNR and PNR events). In both cases the respective function identifies the actual start and end point for the event and measures the loss.

2.7 Non-Reflective Event Sieve

Function: NRrefine

The function NRrefine is used to verify and measure non-reflective events. Both positive-going and negative-going non-reflective events are passed to this sieve. First, an imaginary line is drawn backwards starting from the point preliminarily determined (by sieveA3) to be the start of the event. Then, another line, exactly one pulse width long, is drawn forward from this point. A least squares number is calculated for these two lines. In a similar fashion, the starting point is moved forward one point and the least squares calculation is repeated. Where the least squares calculation returns the


