I stumbled over a datasheet for an LM399, and found an interesting point in it.

One of the first things the manufacturer points out is that the component has a tolerance of 2%. This might seem horrid for a voltage reference, and technically, yes, it is horrid if one is looking for tight tolerance and doesn't have a voltage standard at hand.

But the LM399 isn't a calibration tool, nor a voltage standard per se; in and of itself it is just a Zener diode and a resistive heater in a fancy thermally insulated package. So we don't technically care if it is off, since if we use it in a product we will have a voltage standard at hand to calibrate it against.

The main things of importance are aging and thermal drift. The latter we can handle rather easily: the temperature coefficient of the device is rather good, and we can put it inside a temperature-regulated enclosure. That leaves the long-term drift of the device, which comes down to how it ages with time.

This means that the factory tolerance of the component could be 10% and we technically wouldn't care. Or would we?

Our reference circuit will mostly consist of tapping off the voltage over the diode and amplifying it to a more common value, so our calibration happens in the feedback network of the amplifier. This means that we can adjust our output voltage to whatever voltage we wish for, as long as it is within our adjustment range.

So why would the manufacturer of a device like this state 2% in the datasheet if it technically doesn't matter all that much?

The answer is simple: if we have a fixed resolution to adjust with, then the smaller the range that resolution is spread over, the higher the effective resolution we get, meaning that we can trim our output voltage more finely. For example, if we only have a single ten-turn potentiometer to use, then we would rather spread those ten turns over as small a range as possible.

If we for example want 10 volts, and we know the device has a 2% tolerance, then we need an adjustment range of 9.8 to 10.2 volts, or 0.04 volts per turn on our potentiometer.

Compare that to a 5% component, which would give 0.1 volts per turn, meaning it will be harder to adjust to the same end tolerance.
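
As a quick sanity check of those numbers, here is a minimal Python sketch (my own illustration, not anything from the datasheet) that computes the volts-per-turn figure for a ten-turn potentiometer sized to just cover a given tolerance:

```python
# Rough sketch: volts-per-turn resolution of a single ten-turn trimmer,
# assuming the trim range is sized to just cover the reference tolerance.
# The 10 V target and the tolerance values mirror the example above.

def volts_per_turn(target_v, tolerance, turns=10):
    """Adjustment range needed to cover +/- tolerance, spread over the turns."""
    adjustment_range = 2 * tolerance * target_v   # e.g. +/-2% of 10 V -> 0.4 V
    return adjustment_range / turns

for tol in (0.02, 0.05):
    print(f"{tol:.0%} reference: {volts_per_turn(10.0, tol):.2f} V per turn")
# 2% reference: 0.04 V per turn
# 5% reference: 0.10 V per turn
```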

Yes, we could add a coarse and a fine adjustment, but this adds components to our feedback network and tends to increase the output noise.


There is, though, one more way to fix this problem. If the reference voltage is only used to compare measured values against, and our measurement method has sufficiently high resolution, then we can do our calculations in software based on what we know the reference is, instead of needing to fiddle with adjustments. Our output amplifier will then likely only work as a buffer, perhaps with some modest gain set by a resistive divider built from low-thermal-drift resistors.

But if one can't do it in the digital domain, then a tighter factory tolerance on the component can be of help when aiming for a tight tolerance as the end result.

Parallel or serial?

This is a question I have been thinking of.
If we have two things that we need to send signals between, should we use many pins and send the data in parallel, or is it better to use fewer pins and send it serially?

A quick note: when I say parallel and serial here, I do not mean that the parallel signals need to be synced as in a parallel bus, but rather that we simply have more signal paths over our connection; these could for all we know be independent. Serial here refers to sending the same data over fewer signal paths. Both approaches will likely need some encoding scheme or another.

For everyone thinking that the number of signal conductors will be set by our application: yes, this is true in a lot of cases, since we don't always have control over how many pins we get to use.

But the question is, is it better to use more conductors, or fewer?


As an example, say we have a 32-bit bus running at 40 MHz going from one board to another. Should we use a shielded ribbon cable between the two, or is it better to add a mux/demux stage and use a smaller shielded cable with fewer conductors between them?
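
To put a rough number on that example, here is a small Python sketch of the serial line rate it implies. The 8b/10b encoding overhead and the lane counts are assumptions of mine, just to illustrate the trade-off:

```python
# Back-of-the-envelope sketch of what the 32-bit / 40 MHz example implies
# for a serial link. The 8b/10b overhead figure is my assumption, not
# something given in the text.

BUS_WIDTH_BITS = 32
BUS_CLOCK_HZ = 40e6

raw_throughput = BUS_WIDTH_BITS * BUS_CLOCK_HZ      # 1.28 Gbit/s of payload
encoded_line_rate = raw_throughput * 10 / 8         # if 8b/10b coding is used

print(f"Payload rate     : {raw_throughput/1e9:.2f} Gbit/s")
print(f"Encoded line rate: {encoded_line_rate/1e9:.2f} Gbit/s (with 8b/10b)")
for lanes in (1, 2, 4):
    print(f"{lanes} lane(s): {encoded_line_rate/lanes/1e9:.2f} Gbit/s per lane")
```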

This assumes, though, that there is some distance between the boards. Depending on the distance, we would likely look at signal integrity and the cost of the cable; over a long distance it is likely cheaper to use a serial implementation.

But if these two boards simply have a board-to-board interconnect between them, then we would likely prefer a parallel solution instead, since it likely has a lower associated cost.


But is this always the case?

After all, those 32 conductors for our bus not only need to go through the connector, but also across the board. On the other hand, a serial link with its higher switching frequencies will typically need tighter tolerances on the tracks, which is also a cost.

But if we already have such high-speed protocols on our board, or fine-pitch devices, then this associated cost of going serial is already covered in that regard, at least on one of the boards.


Not to mention that it all gets even messier if we wish to send more than one signal between the boards.

If, for example, we have some other independent bus going between the boards as well, should we try to bake it into the first one, or leave it alone?

If it is already a high-speed serial protocol, then we would likely leave it alone, since it doesn't use many pins to start with.

If it is a large parallel bus, do we encode it as its own serial bus, send both parallel buses over the same serial one, or keep them as two independent parallel buses?


In the end, it all boils down to the cost of the connectors, the cost of the parallel-to-serial conversion, whether we have space on the board, and whether we are willing to add complexity to our product.

After all, those serial-to-parallel converters are another device one needs to keep in stock, yet another part that can be delayed and halt the manufacturing of the board, not to mention another point of failure.

Sometimes I question the sanity of developers....

I stumbled over a microcontroller from Microchip. This micro isn't anything special really, but its pinout makes me curious...

It is the PIC16F887
Here is the datasheet for it: http://ww1.microchip.com/downloads/en/DeviceDoc/41291D.pdf

Following the order of the IO pins on the 44-pin TQFP version of the PIC16F887, they seem to be almost arbitrarily placed around the chip.

IO 1 is at pin 19 (this in itself isn't too strange).
It then continues logically up to pin 24, which is IO 6.
But then the designers decided to jump to pin 31, and from there to pin 30. So the direction has changed, not to mention that there were three other IO pins in between (IO pins 33, 34 and 35), plus two power pins.

Then, to reach IO 9, we jump over to pin 8, after which it continues logically to pin 11, before jumping to pin 14 and continuing to pin 17.

After this it takes a trip up to 32, followed by 35 to 37, then 42 to 44, before going to pin 1, and then over to 38 to continue logically to 41.

If one were to draw a line following the pin order, it becomes a real mess. And this isn't the first chip I have stumbled over with a near-insane pinout. My question, though, is why?

I have stumbled over similar madness among other types of components too.

Like RF mixers that have the RF input and the IF output placed next to each other, despite it being a six-pin package where a grounded pad could have been placed between the two. Not that it would make a huge difference to the specifications of the device, but it would at least be a step in the right direction; even if it only made a 1 dB difference, that is still something.

But so far, I haven't seen many chips with a logical pinout. Most that do are either SRAMs, ADCs, or low-pin-count microcontrollers.


Random-access memory is an important part of most computing systems, and therefore I wonder: how can it be that we can buy many billions of bits worth of RAM for next to nothing?

After all, 1 GB of DRAM is not that expensive, and considering that this most often is an 8 Gb device, and that the typical DRAM memory cell uses one transistor per bit, such a device contains at least 8 billion transistors.

Manufacturing RAM is thus a question of interest: what makes RAM this cheap?

Just think of it like this: we can buy 32 GB of RAM for around 200 USD. That is more than 256 billion transistors for 200 USD; almost any other silicon chip would be expected to have a higher price per transistor than RAM.

After all, even buying a decently high-end microcontroller costs more than a thousand times as much per transistor. (A typical PIC microcontroller has fewer than a million transistors and costs more than a dollar.)
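
A rough back-of-the-envelope comparison, using the same ballpark numbers as above (200 USD for 32 GB, roughly a dollar for a sub-million-transistor PIC), shows the size of the gap:

```python
# Quick sanity check of the price-per-transistor comparison above.
# The figures are the rough numbers used in the text, not exact market prices.

dram_price_usd = 200
dram_bits = 32 * 8 * 10**9            # 32 GB, one transistor per bit
dram_usd_per_transistor = dram_price_usd / dram_bits

pic_price_usd = 1.0
pic_transistors = 1_000_000           # generous upper bound from the text
pic_usd_per_transistor = pic_price_usd / pic_transistors

print(f"DRAM: {dram_usd_per_transistor:.2e} USD per transistor")
print(f"PIC : {pic_usd_per_transistor:.2e} USD per transistor")
print(f"Ratio: roughly {pic_usd_per_transistor / dram_usd_per_transistor:,.0f}x")
```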

I have been looking around the market, and nothing seems to be as cheap to manufacture as RAM, and I have been curious as to why.


I have been doing some research on the subject to find an answer to why RAM is so cheap, and there are many contributing reasons.

The 1st reason why RAM is cheap is that most products that need a lot of RAM will use an external chip, because external chips are cheaper for the same amount of RAM. This means there is a big market for RAM and that RAM manufacturers have a high chance of finding a buyer; this reduces risk, allows more manufacturers to share the market, and lets competition quickly push down profit margins.


The 2nd reason why RAM is cheap is more subtle. If we produce a typical chip, we might only have it in production for a few days, weeks or a month, because semiconductor factories are really good at producing millions of chips every month. (Do note, the exact number of chips a factory can produce varies greatly from factory to factory.)

RAM, on the other hand, is expected to be far easier to sell; after all, if the price is right, people will buy it, as it is almost a jellybean part. Therefore a factory producing RAM can run the same product for a year or two at a time. This reduces the downtime that our RAM needs to pay for, since we aren't switching between different chips several times a month.


The 3rd reason why RAM is cheap is packaging. Stacked-die packages are not uncommon in the RAM industry: if we want x amount of memory in one package, and each chip only has 1/8 of that amount, then we can stack 8 of them and have them share the same pins. Do note that the chips need to support this, but it wouldn't be done if they didn't.

Then there are manufacturers that use through-silicon vias and stack the chips without the need for bond wires, reducing the cost even more.

Chip-scale packaging isn't uncommon either, eliminating the need for a carrier and metal pins; both of these cost money and reduce manufacturing yield, so taking them away decreases losses and reduces the manufacturing cost.


These three are the main reasons why RAM is comparatively cheap compared to other chips.

But are there methods of making it even cheaper?

And the answer here is simply, yes.


After all, stacking chips costs money, and if defective chips end up in the stack, then we might end up with a non-working product. Manufacturing yield is important within the semiconductor market, and is a major concern when producing products at a tight profit margin.

Here manufacturers can take many different approaches to the problem, anything from adding functionality to test the chips at power-on, to having a spare chip in the stack ready to take over the place/address of a defective one.

On top of this there are other methods, everything from 3D semiconductors, to clever usage of the DRAM memory cell.

After all, a DRAM is nothing more than a switching matrix connecting capacitors to drive circuitry, and there is nothing stopping us from storing a value as four or more distinct voltage levels. The downside is that the time between refreshes gets shorter, so as to stop unwanted drift, and the drive circuitry needs to support this type of functionality for it to work.
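
As a toy illustration of that idea, here is a sketch of storing two bits per cell as four voltage levels; the 1 V cell voltage and the level spacing are arbitrary assumptions of mine, not how any real multi-level DRAM is specified:

```python
# Toy illustration of the multi-level-cell idea: storing two bits per
# capacitor by using four distinct voltage levels instead of two.

CELL_VOLTAGE = 1.0
LEVELS = [i * CELL_VOLTAGE / 3 for i in range(4)]   # 0, 1/3, 2/3, 1 V

def write_cell(two_bits: int) -> float:
    """Map a 2-bit value (0..3) to one of four stored voltages."""
    return LEVELS[two_bits]

def read_cell(voltage: float) -> int:
    """Pick the nearest level; real sense amps also have to beat leakage drift."""
    return min(range(4), key=lambda i: abs(LEVELS[i] - voltage))

stored = write_cell(0b10)
print(read_cell(stored + 0.05))   # a small drift is still read back as 0b10 == 2
```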

Other than this, there are also other technologies on the horizon.

I have as of late been thinking about pin pitch.

So why is pin pitch important, and what considerations does one need to take with different pin pitches?

A warning here: I will later take a sudden deep dive into BGAs, as they are a nice example of the different problems with fine-pitch devices, and I'll also postulate a few solutions to the particular problems of such devices.

My guess is that most people working with electronics know about the 0.1 inch (2.54 mm) pin pitch, and this pitch is actually rather nice as an industry standard.

The advantage of a pin pitch in the 2-3 mm range is that a device can have a fair number of pins and is therefore able to give us a lot of functionality; this could for example be a microcontroller. A second advantage of this pitch is that it is easy to hand solder, not to mention easy to probe when troubleshooting.

The downside of a pitch in this range is the number of pins we can have on one device. If we wish to have a thousand pins, that isn't trivial to achieve with a 0.1 inch pitch, at least not without creating a huge device in the process. Even if we make it a pin grid array or a ball grid array, it will still be about 10 square inches in area, or almost 65 cm^2. In other words, a rather huge and rather impractical device. The positive side is that it will be easy to route the PCB traces for such a device.
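
For those who like numbers, here is a small sketch of how the footprint of a square grid array scales with pitch for that 1000-pin example; it is pure geometry and ignores the margin a real package would add:

```python
# Sketch of how package area scales with pin pitch for a square grid
# array, using the 1000-pin example above.
import math

def grid_array_area_cm2(pin_count, pitch_mm):
    pins_per_side = math.ceil(math.sqrt(pin_count))
    side_mm = pins_per_side * pitch_mm
    return (side_mm / 10) ** 2          # mm^2 -> cm^2

for pitch in (2.54, 1.27, 1.0, 0.8, 0.5):
    print(f"{pitch:>4} mm pitch: {grid_array_area_cm2(1000, pitch):6.1f} cm^2")
# 2.54 mm pitch comes out around 66 cm^2, matching the ~65 cm^2 figure above.
```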

So from this we can see that for devices with many pins, finer pin pitches help reduce the component footprint. This can be anything from 0.05 inch (roughly 1.3 mm), to 1 mm, 0.8 mm, or even something crazy like 0.2 mm.

The downside of these finer-pitch devices is that they are generally harder to solder and can lead to more manufacturing errors than a larger-pitch device.

I have for a while been wondering why it is so hard to solder a BGA package compared to other devices. Or is it simply chance?


I like to say that nothing is purely up to chance, but rather up to manufacturing tolerances. To my knowledge, poor soldering quality with BGAs specifically mostly comes down to the size of the solder balls on the device and how flat our circuit board is.

If the solder balls are too large, they will create shorts between our pads. So generally we can say that less solder is good, or is it?

If the balls are too small, they will not compensate for the unevenness of the surface of the board. This unevenness is a side effect of etching traces into the PCB, which leaves an uneven distribution of copper; when the board is heated we then get uneven thermal expansion, leading to a loss of surface flatness.

That makes me wonder about possible solutions. We could polish the circuit board to a mirror finish through a method called lapping, which ensures surface flatness to a far greater degree than what we would typically find on a standard printed circuit board. This would greatly reduce the amount of solder needed and lower the risk of creating shorts.

Here is a source for those interested in reading more about lapping: https://en.wikipedia.org/wiki/Lapping

But as stated before, uneven thermal expansion will still affect this method, so the polishing would need to happen close to the melting point of the solder one intends to use. This is to ensure that the board is reasonably flat when the solder is fluid.

Through this method we could lower the amount of solder without risking solder bridges between pads.


The downside shows up when a solder bridge isn't the concern, but rather the lack of a connection between the device and the board; an open joint can also lead to device failure and is of equal importance.

Fixing this second problem with BGA devices is also hard, as it needs a different solution. One solution I have been thinking of is to use the surface tension of the solder to help it bridge the gap between chip and pad; this could be done by having a small needle/pin sticking up from the pad.

This needle is technically not hard to affix in place, as it needs to be nothing more than a short piece of wire that is easy to coat in solder; this could for example be gold or copper bonding wire of the kind typically used within the semiconductor manufacturing industry.

These wires do not need to extend far from the surface of the board; even 0.1 mm would technically cover most applications. This could be made by affixing the bonding wire to the copper pad, making a gentle bend in the wire to form a small arch, and then welding the other end of the arch to the same pad.

The first of the two biggest downsides of this method is that placing bonding wire in this fashion isn't without extra cost, but for low-volume manufacturing involving very expensive BGAs it could technically be worth it overall.

The second downside is smaller: stacking the unpopulated boards on each other in the traditional fashion will destroy the arches one so carefully made. Therefore it would be advisable to have such an arch-building step as close to board population as possible.


Another solution is not to use solder at all, which might seem even crazier than the other two. We could create the same arches on the BGA device itself, replacing the solder balls with small "leaf springs" made of, for example, gold-coated copper bonding wire. With, say, two of these arches on each pad of the device, we could then press the device onto gold pads on the PCB.

This would give us a solder-free connection between device and board, with no risk of solder bridges. The only risks left are a spring contact leaning over and shorting a nearby pad, or not having enough reach to make a proper connection.

In reality this kind of solution would have a large number of problems associated with it, and it is nothing but a thought experiment of mine. But a spring grid array device would have advantages such as no need for a reflow thermal cycle, and the ability to be easily replaced if any problems are detected, since it would be held in place by a bracket.

And technically, a spring grid array package kind of already exists, or rather the reverse of it does. It is found in Intel's LGA sockets: the CPU sits on a small PCB with gold-plated pads, and for each of these pads the socket has a corresponding spring contact. So a reliable, solder-free, spring-based connection already exists on the market. These devices, though, have a pin pitch of a bit over 1 mm, so they are technically not extremely fine-pitch devices.


Another device package with generally very fine pin pitch is the quad flat pack. These can range anywhere from over a millimetre of pitch down to 0.5 mm, which is common, as is 0.4 mm, and even 0.3 mm and below exists on the market.

The problem with these is the formation of solder bridges between the pins, which can lead to device failure among other problems. The positive side of QFP devices is that one can easily check for such bridges by visual inspection, and manually fix the problem without too much work.


In general, the rule of thumb is, the finer the pin pitch, the harder it will be to solder.

And how to properly solder different devices is a topic in and of itself.

If we are building an R2R digital-to-analog converter, what precision do the resistors need to have?

The answer depends on the number of bits we wish the converter to have, and on how accurately we need each step to be of equal size.

But what rule of thumb should we follow?

First we will look at a converter with very few bits, in this case 4 bits. I'll also go through how this converter is built.

This will consist of four buffered digital inputs, each connected to its own resistor. These resistors are of equal value and will be named 0, 1, 2 and 3, numbered in ascending order from least significant bit (LSB) to most significant bit (MSB).

We connect a resistor between ground and the side of resistor 0 that isn't connected to its buffer (that unbuffered side is the node I mean whenever I refer to one of the numbered resistors). This resistor between node 0 and ground has the same value as resistor 0. This forms the first digital-to-analog conversion stage, for our least significant bit.

Then we connect a resistor between node 0 and node 1; this resistor, though, has half the value of the others. This forms the second stage of the conversion. Why this resistor has half the value I will get into later.

Next we repeat the prior step by connecting resistors from node 1 to 2, and from 2 to 3. The analog output of this 4-bit converter is then taken at node 3.

The reason why the resistors between nodes 0 and 1, 1 and 2, and 2 and 3 have half the value of the others is relatively simple.

Stage zero, for the least significant bit, outputs either ground or half of our reference voltage. If this stage outputs ground, then the resistor to ground and the resistor to its buffer both go to ground, so these two resistors are in parallel; their parallel combination plus the half-valued resistor to the next stage adds up to the same resistance as resistor 1.

Effectively, an R2R digital-to-analog converter is nothing but a chain of voltage dividers.

Say our reference voltage is 1 volt. Stage zero then selects between 0 volts and 0.5 volts. The next stage will either increase this voltage or decrease it: if the prior stage is high, this stage produces either 0.75 volts or 0.25 volts; if the prior stage is low, this stage outputs either 0.5 volts or 0 volts.

Simply stated, each stage adds or subtracts half the voltage difference between its input from the earlier stage and the output of the buffer in the stage itself.

So for our 4-bit converter, if stage zero is high, the most significant bit is high, and the two bits in between are low, giving the digital input sequence 1001, then the voltage at each node through the converter will be as follows
(the reference voltage is still 1 volt, to keep everyone on the same page):

At the node of resistor 0, there will be 0.5 volts in reference to ground.
At the node of resistor 1, there will be 0.25 volts to ground.
At the node of resistor 2, there will be 0.125 volts.
At the node of resistor 3, there will be 0.5625 volts.

And here I will be quick to point out that these intermediate values are not what you would actually measure. The calculation does not take into account that the later stages load the earlier ones, pulling node zero below 0.5 volts and shifting the intermediate nodes to values other than the simple halving suggests. The halving described above only gives the correct value at the output of the last stage in the converter.

But if we set stages 0 and 1 high, stage 2 low, and stage 3 high, the arithmetic gets a bit more involved: (1 - (0.5 + 0.25)/2)/2 + 0.375 = 0.6875, and in our example converter the output for this digital input really is 0.6875 volts. The actual voltages at nodes 0, 1 and 2, however, are ~0.57031 volts, ~0.64063 volts and ~0.53125 volts, while the output is 0.6875 volts.
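
For anyone who wants to check those node voltages, here is a sketch that solves the ladder with plain nodal analysis; the 2.5 k / 5 k values are simply the example values used later in this post, and only their ratio matters:

```python
# Full nodal solve of the 4-bit ladder for the input pattern discussed
# above (LSB..MSB = 1, 1, 0, 1), to check the quoted node voltages.
import numpy as np

VREF = 1.0
R, R2 = 2.5e3, 5.0e3                     # "half valued" rung resistor and the 2R legs
bits = [1, 1, 0, 1]                      # stage 0 (LSB) ... stage 3 (MSB)

# Unknowns: node voltages V0..V3 at resistors 0..3.
G = np.zeros((4, 4))
I = np.zeros(4)
for n in range(4):
    G[n, n] += 1 / R2                    # 2R leg up to the bit buffer
    I[n] += bits[n] * VREF / R2
    if n == 0:
        G[n, n] += 1 / R2                # 2R termination to ground at the LSB
    else:
        G[n, n] += 1 / R                 # rung resistor to the previous node
        G[n, n - 1] -= 1 / R
        G[n - 1, n - 1] += 1 / R
        G[n - 1, n] -= 1 / R

V = np.linalg.solve(G, I)
print(V)   # ~[0.5703, 0.6406, 0.5313, 0.6875], matching the values above
```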

Another method (and a far easier one) of calculating the output voltage is simply to say that the most significant bit adds half our reference voltage when high, the second-most significant bit adds one quarter, and so on down the chain until we reach the least significant bit, which in this case adds 1/16th of our reference; 1/256th if it is an 8-bit converter, and 1/2^n for a converter with n bits.

Then we simply add together the contributions from each bit to get our expected output.
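
In code form, that bit-weight method is just a short sum; a minimal sketch:

```python
# Minimal sketch of the bit-weight method described above: each bit
# (counting the MSB as weight 1/2) contributes Vref / 2**position when high.

def r2r_output(bits_lsb_first, vref=1.0):
    n = len(bits_lsb_first)
    return sum(b * vref / 2**(n - i) for i, b in enumerate(bits_lsb_first))

print(r2r_output([1, 0, 0, 1]))   # 0.5625, the 1001 example above
print(r2r_output([1, 1, 0, 1]))   # 0.6875, the second example above
```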


But back to the question at hand: how much precision do our resistors need to have?

Could we build a 32 bit converter with 1% accurate resistors?

Do note that I used the word "accurate" and not "precise"; these are scientifically different things. Accuracy is how close the values are to an international standard, while precision is how close they are to each other. For all we know from the statement, a box of 1% accurate resistors could have a worst-case difference from each other of 1 ppm (0.0001%), though generally this will not be the case. And even then, 1 ppm precision is still roughly four thousand times coarser than the matching needed for a 32-bit converter, since one part in 2^32 is about 0.23 ppb.

Here is a bit more information about the differences between accuracy and precision from our fellow Wikipedia, if any of you wish to read more on the subject: https://en.wikipedia.org/wiki/Accuracy_and_precision

So how much precision is needed for an 8 bit converter then?

I'll start with 8 bits to make the numbers easy.

First we will give each resistor a simple name, since I haven't found a convenient way of adding pictures in the middle of these posts as of yet. (I guess Google didn't expect people to write long technical posts about electronics in this fashion on their service...)

The first resistors to get names are the ones connected to the buffers; these keep the same names as before, numbered in ascending order from least significant bit to most significant.
The resistors between the stages will be named I0 for the input to stage 0, I1 for stage 1, I2 for stage 2, and so on. Do note that I0 has twice the value of all the other resistors named I, and that this resistor is permanently connected to ground.

So now the question is: how well matched do resistor 0 and I0 need to be?

The answer is rather simple: these two only form a divide-by-two, and the matching of these parts can be as sloppy as you see fit for your application.

Generally, this is the stage where we decide how precise each step will be. If we match our LSB (least significant bit) stage very closely, then we most probably wish to keep at least that tolerance throughout the design.

But let's say that our first two resistors are matched to 5%: we aim for 5 kohm, resistor 0 comes out at 4997 ohm and resistor I0 at 5246 ohm, so the two are within 5% of each other.

Then how precise does our next stage need to be?

First off, I1 will need to be close to 2500 ohms, but how close is the question.
And resistor 1 will need to be close to 5000 ohms.

These two, though, need to be about twice as precise as the earlier stage to maintain the precision we set in the first stage, and the same doubling applies to each subsequent stage. So in an 8-bit converter, our last stage needs to be 2^7 = 128 times more precise than our first stage to get "perfect" linearity.

Do note that the relative value from one stage to the next matters more than the absolute accuracy of each stage. So technically, the values can slowly wander in absolute terms from stage to stage, as long as the matching between parts is kept within tolerance.

And as our MSB affects the output far more than our LSB, we can save cost and time by using cheaper and simpler parts for the lower bits, while spending more on the higher-valued ones.
Technically we could use 10% parts for the first two or three stages without suffering any major loss in precision or linearity, and from that point on spec better parts for the more influential bits in the converter.

So a simple formula would be: tolerance in percent = 1/2^n * 100. This formula, though, only gives an exceedingly loose tolerance, and would effectively ensure that the device needs software calibration to have a chance of decent linearity.

A better formula is T = 1/(2^n * K) * 100,
where K is the amount of extra tolerance margin we wish to have and n is the stage number; the formula is applied to each resistor in the chain.

So if we wish to have a tolerance of 10% for our first resistor (n = 0), then K = 10,
giving us the following list of tolerances for each resistor:
0: 10%
1: 5%
2: 2.5%
3: 1.25%
4: 0.625%
5: 0.3125%
6: 0.15625%
7: 0.078125%

Do note that K = 10 will result in a fairly linear device, but quickly leads to expensive components.

But if we instead say that we wish resistor 2 to have a tolerance of 10%, then K = 2.5, giving us the list:
0: 40%
1: 20%
2: 10%
3: 5%
4: 2.5%
5: 1.25%
6: 0.625%
7: 0.3125%
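
Here is a small sketch that generates both of those lists from the formula T = 1/(2^n * K) * 100 given above, so you can plug in your own K:

```python
# Per-stage tolerance lists from T(n) = 100 / (2**n * K), with n = 0 for
# the LSB stage, as described above.

def tolerance_percent(n, k):
    return 100 / (2**n * k)

for k in (10, 2.5):
    print(f"K = {k}:")
    for n in range(8):
        print(f"  stage {n}: {tolerance_percent(n, k):g}%")
# K = 10 reproduces the 10% ... 0.078125% list,
# K = 2.5 reproduces the 40% ... 0.3125% list.
```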

And in reality, we can always software-compensate the more significant bits, meaning that they too can use looser-tolerance parts. Using components with a tighter tolerance just means we need to software-compensate fewer components in our product, which can result in things like quicker response times, lower software overhead, and less memory required in the device.

Secondly, what if you don't wish to buy 0.01% components to build a high-resolution, high-precision R2R digital-to-analog converter?

Here is a simple approach one can try. First measure a batch of components with a multimeter that has high enough resolution to get close; note that accuracy doesn't matter here (unless your multimeter has a serious drift problem), only resolution.

Then sort these resistors into groups of roughly equal value; with only a four-digit multimeter you can get the resistors within one group to be within 0.1% of each other.

From one such group you can build a Wheatstone bridge and apply, for example, 20-30 volts across it. (Do note that excessive voltage can lead to failure if the resistors are of a low value.)

Then measure the voltage across the two halves of the bridge, swap two of the resistors, and measure again; with a little work one can figure out which resistor is higher than the others and which is lower. From here we can add a parallel resistor to the one with too high a value. This parallel resistor should be of a far greater value, and a rough calculation using the parallel-resistance formula will give a suitable value.
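
For that parallel-trim step, the arithmetic is just the parallel-resistance formula rearranged; a quick sketch with made-up example values:

```python
# Parallel-trim step: given the resistor that measured high and the value
# we want it to match, solve 1/Rt = 1/Rhigh + 1/Rpar for the padding
# resistor. The 5003 / 5000 ohm figures are made-up example numbers.

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def trim_resistor(r_high, r_target):
    """Parallel resistor that pulls r_high down to r_target (r_high > r_target)."""
    return r_high * r_target / (r_high - r_target)

r_high, r_target = 5003.0, 5000.0
r_par = trim_resistor(r_high, r_target)
print(f"Pad with ~{r_par/1e6:.2f} Mohm")                 # roughly 8.3 Mohm
print(f"Check: {parallel(r_high, r_par):.2f} ohm")       # back to ~5000 ohm
```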

With this method you can construct resistor modules of very nearly the same value, even if your multimeter only has 3 digits. This is the historic way of making precisely matched resistors.

If you wish for even better-matched resistors, you can start with two resistors in series and then place parallel resistors over them to bring the value into tolerance. An advantage of this method is that the maximum power dissipation of the module is doubled to begin with, meaning that you can increase the bridge voltage even further.

Technically this can be done with even more resistors, but at some point it becomes a bit silly. And do note that eventually the solder joints connecting your resistor module to the rest of the R2R converter will affect its in-circuit value, meaning that at some point further matching of the modules becomes rather worthless.

Other things to take note of are resistor noise, and propagation delay and how it can create noise as well, especially at higher frequencies. Also, huge resistor modules can make your device rather unattractive to the eye. My advice on all these points would be: don't use more than 4 resistors per module, arrange them in a 2 x 2 array, preferably surface mount, and keep the values around the 2-40 kohm mark for the 2R resistors/modules and the 1-20 kohm range for the half-valued resistors/modules in your converter.

That is just a bit more than 2200 words; I hope you all liked it and found it informative.

If we wish to measure gravity, then how do we do that?

An easy method is to have a very accurate scale and a very accurately known mass, and simply measure how much it weighs, since the weight of a mass is directly proportional to the local gravity. This is only strictly true in a vacuum, as the air will make the object appear slightly lighter than it should be because of the buoyancy provided by the air it displaces.

But what if we don't have a known mass, nor a scale?

Then we can do another setup.

The setup uses three optical gate sensors inside a vacuum chamber. This chamber can be a piece of pipe.

We drop an object past these three sensors and measure the time between each of them. If we know the exact distances between the three, and the times at which the object passes them, then we can calculate the average speed between the first and second sensor and compare it to the average speed between the second and third.

As we know both length and time, we therefore know speed, and from the change in speed also acceleration. And as gravity is a "constant" acceleration on a mass, we now know the gravity at this location.
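
As a sketch of that calculation: for constant acceleration, the average speed over an interval equals the instantaneous speed at the middle of that time interval, so two gate-to-gate intervals give the acceleration directly. The gate spacings and times below are made-up example numbers:

```python
# Gravity from three timing gates: two interval-average speeds, assigned
# to the midpoints of their time intervals, give the acceleration.

def gravity_from_gates(d12, d23, t1, t2, t3):
    """d12, d23: gate spacings in metres; t1..t3: times the object passes each gate."""
    v_a = d12 / (t2 - t1)          # average speed between gates 1 and 2
    v_b = d23 / (t3 - t2)          # average speed between gates 2 and 3
    t_a = (t1 + t2) / 2            # midpoint times where those speeds apply
    t_b = (t2 + t3) / 2
    return (v_b - v_a) / (t_b - t_a)

# Example: gates 0.5 m apart, object released from rest 0.5 m above gate 1.
print(gravity_from_gates(0.5, 0.5, 0.31928, 0.45152, 0.55300))   # ~9.81 m/s^2
```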


An example device setup we can build is as follows:

At the bottom we have a cone; this places our falling object at a known end location. This is technically not important for the measurement itself, but it makes the measurement easy to execute.

At the bottom of this cone there is an opening to a solenoid-actuated plunger; this imparts momentum to our falling object and first sends it flying straight up through the measurement chamber.

The measurement chamber is a straight piece of pipe with three sets of optical sensors; we use optical sensors so as to impart as little physical force on the falling object as possible.

We need to measure the distance between these three sensors, and the more accurately we measure it, the better our result will be.

Next we need to evacuate the chamber so that it doesn't contain any air; this makes sure that air resistance doesn't affect our measurement.

Our falling object will be a ping pong ball. We need to make a small hole in the ball with a needle, since it could rupture if we don't let the air out of it when evacuating the chamber.

We also need to note that the solenoid can't impart too much force on the ping pong ball, or it can hit the top of the chamber and not get a clean drop. In itself that is not critical, as it only adds a speed offset and the acceleration due to gravity stays the same, but the ball can then also end up falling at an angle, meaning that all the measured speeds change; after all, it is a round object, and trigonometry would quickly make our day a lot less fun.

Do note that this system resets itself thanks to the cone-shaped bottom of the chamber. On top of this, we can also measure the ball decelerating as it flies upwards past the sensors, giving us two measurements per shot and the ability to average them.

In the end, the only SI quantities needed for this measurement are time and length, both of which are based on fundamental physics; and length is itself defined via time. So in the end, one only needs a time (frequency) standard to make a measurement of gravity.


This is so far one of the simplest methods I know of to measure gravity with relatively simple tools.

I have been thinking about low-current and low-voltage measurements recently, and I am curious: how can we efficiently protect our circuit from overvoltage events?

Let's say we have a JFET operational amplifier with a large-valued feedback resistor into its inverting input. Its non-inverting input is connected to our second input terminal, while the inverting input is connected to the first input terminal of our measurement device.

In this application we would want input protection for this amplifier, but how do we implement it, and what problems can it create?

A simple solution would be to use a standard silicon diode; it clamps the voltage across it to roughly its forward voltage. The problem is that the diode still conducts a little even when the voltage is well below its forward voltage, and since we intend to measure low currents, this leakage can mess with our measurement.
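
To show how much a plain silicon diode conducts below its nominal forward voltage, here is a sketch using the Shockley diode equation; the saturation current and ideality factor are typical-ish assumed values, not taken from any particular datasheet:

```python
# Why an ordinary diode still leaks below its "forward voltage": the
# Shockley equation has no hard threshold.
import math

I_S = 1e-12        # saturation current, A (assumed)
N = 1.8            # ideality factor (assumed)
V_T = 0.02585      # thermal voltage at ~300 K, V

def diode_current(v):
    return I_S * (math.exp(v / (N * V_T)) - 1)

for v in (0.1, 0.2, 0.3, 0.4, 0.6):
    print(f"{v:.1f} V -> {diode_current(v):.3e} A")
# Even at 0.2-0.3 V there are already tens to hundreds of pA flowing,
# which is far from negligible next to a low-current measurement.
```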

A solution here is to apply a bias voltage to each diode so that the diodes are always reverse biased within the measurement region of interest, effectively eliminating the problem. The downside, though, is reverse leakage, unless our diodes are well matched, in which case the leakage through each will be equal.

In any case, a diode always conducts somewhat when forward biased, and therefore might not always be the best overvoltage-protection device of choice.

Another device is the spark gap, though these typically need a large voltage (the smallest GDT I have found so far is rated at 24 volts). They typically come as gas discharge tubes, or as jagged/toothed traces on circuit boards. The downside of the latter is carbon buildup and, eventually, unwanted current paths.


Another application I have been curious about is AC coupling on oscilloscopes. Through AC coupling we can filter out the DC content of our measurement and then amplify the AC content. But what happens if we take our probe and stick it onto 100 volts DC (relative to earth)? Then we have a 100-volt transient that will couple through the AC path and into our amplifier.

We could do rough overvoltage protection here, but it is a high-frequency event, and since our analog front end is meant for high frequencies, the high-voltage pulse will follow the analog path to our amplifier. If we wish to stop this, then at some point we need to prioritize overvoltage protection over signal integrity, meaning that we provide a low-impedance path to ground and a higher-impedance path to our amplifier.

Do note that the impedance presented by a component can change with the applied voltage, meaning that in the end a few simple diodes might be everything this application needs. And if we wish to eliminate the current draw of the diodes, we can bias them so that they are always reverse biased while the signal is inside the range of interest; when the signal goes outside this range it forward biases the diodes and sees them as a lower-impedance path.

I hope this read was of interest. Please comment if you have any ideas about something I should read into and make a post about.

If we were to develop a new computing architecture, for example to replace x86 and the (IBM) PC, then I am curious how much of a performance increase we could bring in the process.

Now, doing something like this is unlikely to be successful in practice, as most companies are more interested in backwards compatibility and the ability to use the same software on more than one device, and switching to a new system no one is currently using is not going to be a normal thing.

And consumers are normally not much different; we can ask the same question: why would anyone switch to a system that there is no software for?

Other than people who see a special use for the system, hobbyists who do it for fun, and the people who like to be early and invest in something potentially good.


The question here is rather: what should one aim for when trying to make a competitive system?

First we can aim to make the system simple and easy to grasp; in other words, avoid seemingly arbitrary decisions, keep things simple to use, and have an overall logical workflow. This makes the system more approachable for both hobbyists and companies. After all, if you have two systems, one documented in a 40-page A4 document with all the information needed to use it, and the other in a 6000+ page document containing the equivalent information, on which one can we expect to spend the least amount of time getting a working prototype of our software?

The second thing we should aim for is power efficiency, as this makes it a system we can use in more applications. After all, why use a computer that has 2 times the processing power if it consumes 5 times as much power? In such a case, the more power-hungry system needs to be fairly application specific and good at what it does to be worth having.

The third thing is rather easy to see: performance. This can vary greatly depending on the application, but generally, if we can offer better performance, that is a good reason to switch.

Fourth is security, a feature we are going to need in one way or another. How we implement it is also important, and here we might even look at building the whole system around some standard security practices. How we implement security can vary greatly depending on how our system is integrated and intended to work, and on a lot of other things as well, but one logical observation is that a simpler system with fewer parts will be easier to make secure.

The fifth thing we should aim for is a rich feature set, and these features should be useful for the intended applications of the system. It is here that we can draw the distinction between a general-purpose architecture and an application-specific one. After all, if we give our system a lot of features to handle things like data acquisition, FFT analysis, and high-bandwidth synchronous memory management, among other features typically found in the system architecture of an oscilloscope, then we can rather easily expect that the same system will have rather unimpressive performance if we were, for example, to play a modern computer game on it.

These five considerations are in no particular order; their weight will vary depending on the intended applications of the system and how we choose to approach each individual part of the overall design. With every new decision we risk introducing more complexity, less energy efficiency, lower overall performance, and even a lack of security, and these are all things we should take into account with each and every addition to a system.

But in the end, we still need a simple-to-understand, power-efficient, well-performing and secure system that also works well for the applications we intend to market it for; these are the important parts of releasing a new system.

Things that make such a release harder are backwards compatibility, old standards, and a reliance on pre-existing solutions on the market. These all make a transition far harder, and are generally the reason why x86 as an architecture has been around for almost 40 years. The release of the IBM PC on 12 August 1981 is also a major reason why it is still around.


My question, though, still stands: will there be a new architecture coming forth in the next decade that slowly takes over, or will we stick with what we have?

If we wish to build a product with a long expected life span, then what design considerations should we make?

Here we can first start by looking at what a long expected life time is, as this can vary greatly depending on the application.

But we can say that we wish to make a product that should last at least 25 years, just to make it a challenge.

The first easy thing we can do when selecting a component is to take the MTBF value into consideration; this value can typically be anywhere from ten thousand to a million hours, depending on the component. MTBF stands for Mean Time Between Failures and is a statistical measure of how often failures are expected across a large population of components, not a guaranteed lifetime for any single part.

So if we have 50 capacitors with an MTBF of 20 thousand hours, then as a rough rule of thumb we expect around half of them to be out of specification or to have failed after those 20 thousand hours (a constant-failure-rate model actually puts it closer to 63%). Some will fail sooner than others, and some will work flawlessly for a lot longer; this comes down to environmental factors and manufacturing tolerances.
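
Under the usual constant-failure-rate assumption, the surviving fraction after time t is exp(-t / MTBF); here is a sketch using the 50-capacitor example above:

```python
# Expected survivors under a constant-failure-rate (exponential) model.
# The 50 capacitors / 20,000 h figures are the ones from the text.
import math

mtbf_hours = 20_000
population = 50

for t in (5_000, 20_000, 25 * 365 * 24):      # 25 years is ~219,000 h
    surviving = population * math.exp(-t / mtbf_hours)
    print(f"after {t:>7} h: ~{surviving:.1f} of {population} still working")
# At t = MTBF about 37% survive (roughly 18 of 50), a bit worse than "half".
```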


The second thing we can look at is how a component handles stress, overvoltage spikes, and so on. This could for example be how a transistor handles the induced voltage spike from sharply cutting off current through an inductive load, and how this affects the transistor in the long run.

Here I have already hidden the small design decision in the text: we can make sure to never sharply cut a flow of current, and if we need to, then we should overspecify the component to a far greater voltage rating than the application strictly needs.


Then we have dust accumulation, embrittlement of soft materials, electrolytic capacitors drying out, and work hardening of copper wire leading to open connections, not to mention tin whiskers forming between pins and other conductors.

How do we fix these small problems?

Dust is rather easy, as we can build a fully enclosed product with no openings for dust to get in through. This can, though, still suffer from moisture ingress and corrosion, which can partly be prevented with a desiccant; but that is not a permanent solution, as it will only absorb a set amount of water.

Material embrittlement is a bit harder, as this is down to the chemical properties and composition of the material changing over time; this can partly be due to solvent attack, but also to oxidation, hydrogen ingress, ozone, etc.

Electrolytic capacitors drying out is, on the other hand, an easy fix, as we could use ceramic capacitors instead. These are typically more expensive for the same specifications, but unlike electrolytic ones, ceramic capacitors don't typically fail or show any major specification drift over time, unless exposed to solvents, temperature cycling, excessive voltages, mechanical impacts and so on. And we can still get a 16-volt-rated 25 µF ceramic capacitor for under $5 US a piece, so it is not far too expensive if the application needs it.

Tin whiskers, on the other hand, are a bit harder to fix, unless we go back to the old industry-standard 60% tin / 40% lead alloy that was typically used before RoHS legislation stopped it from being used commercially without a reasonable cause. 60/40 solder is technically not illegal in most places, but it is generally not accepted unless there is a justified need, such as when a lead-free finish would be likely to develop whiskers within the expected lifetime of the product.


The next thing is the reliability of the semiconductor devices, if such are used in our so-far-unknown hypothetical design. What is the expected life of a transistor?
This is a rather good question, but the fact that a lot of hobbyists can pull out a 20+ year old electronics project built with recycled ICs from the early 70s and 80s and still have it work some 30+ years later indicates that one can at least trust a chip to survive that long. But here we should probably go into the more subtle details of integrated circuits to see when and how a chip is expected to fail.

To start with, a semiconductor device relies on the semiconducting properties of silicon or another semiconductor material. This material is doped with elements like boron, arsenic, phosphorus, antimony and gallium, among others. These dopants are diffused into different areas of the substrate, giving the junctions between the regions their electrical properties.

A major failure point of these devices is misalignment of the different regions; this leads to the chip not working at all, and such chips will rarely make it out of the factory.

The second failure point is further diffusion of the doped regions, leading to a drift in the electrical specifications of the device; this can cause performance drift in analog parts, or outright failure in both analog and digital technologies. It is one of the reasons why the specifications of a chip drift with time, and why chips sometimes fail outright. This effect is typically called ageing.

Ageing is mostly driven by heat, and some manufacturers will burn in a chip by keeping it in an oven at an elevated temperature of around 80-300 degrees Celsius, depending on the chip, the semiconductor technology it uses, and the intended application. After this burn-in period of anywhere from a few days to months, at times even years, the chips are functionally tested before being sold or discarded.

A third thing affecting chips is electrical breakdown of insulation. This is more typical in MOSFETs, mostly because they have a very thin oxide layer as their gate insulation, making them sensitive to excessive voltages. This has already been touched on earlier in this article.

The last major reason for chip failure is thermal cycling.

So what precautions should one take when designing a product that needs a long expected life time?

There is also mechanical stress: vibrational modes leading to mechanical failure or broken traces on the circuit board, along with components shaking loose over time.

Then there is also electromagnetic interference to take into consideration, among other things that can affect our hypothetical product's chances of survival.