West Florida Components
Your source for electronic components, parts and supplies

Smarter Prototyping through 3-D Printers

The technology of 3-D printing has been with us for more than thirty years, and industry uses it extensively across the design, engineering, and manufacturing disciplines. This has allowed it to earn a valuable and distinctive position as a design and manufacturing methodology. Also referred to as additive manufacturing, it lives up to the promise of making businesses more competitive, as it offers tools to streamline and enhance the process of product creation.

3-D printers such as the F123 series from Stratasys have become a cornerstone of rapid prototyping (RP). Engineers and designers often face several barriers in the RP process. The design of these new 3-D printers helps remove those barriers and makes the RP process more productive and efficient. Using 3-D printers such as the F123 series, companies can produce better products faster, thereby reducing their time to market.

However, many companies avoid embracing this technology or broadening its use, for a number of legitimate reasons. Small and medium-sized companies find it difficult to justify the cost of investing in a professional 3-D printer. Most additive manufacturing processes demand a comprehensive knowledge of the process and equipment, which for many companies means hiring new employees, an additional financial burden. However, staying with old, traditional methods of design also poses several risks.

New professional 3-D printer platforms such as the F123 offer a suitable solution, as they are designed specifically to increase the simplicity and efficiency of the rapid prototyping process. The new printers use fused deposition modeling to help companies become more competitive through adoption of RP, and by improving their existing RP processes.

The major challenge developers face is the time taken to develop new products, which increases the chance that a competitor will reach the market before they do. Most small-scale manufacturers do not have adequate resources to test multiple design iterations fully, and longer development times hurt their ability to generate new revenue.

The design of the Stratasys F123 3-D printer series meets the requirements of the entire RP process, from concept verification to design validation and functional testing. Moreover, it does all this economically and quickly within a workgroup setting.

One of the major advantages of the F123 series is the choice of multiple materials rather than having to settle for less-optimal ones. These include durable engineering thermoplastics such as ASA, PC-ABS, and ABS, as well as the economical PLA.

This ability also helps in the design validation phase, since multiple designs can be produced in less time, giving the design team room to refine and optimize the design. As the printer can use highly durable, engineering-grade plastics, designers can also conduct functional tests on their design, ensuring the final part operates as intended.

As the prototyping process is conducted in-house with local 3-D printers, the intellectual property remains safe. Companies can thus avoid the risk of confidential design information falling into a competitor's hands, as can happen when using external service bureaus or machine shops.


What is new in Touchless User Interfaces?

The technology of user interfaces is expanding rapidly. Apart from touch, it now includes advancements such as gestures and speech assistants. Amazon's Alexa has become part of the common lexicon, followed by Apple's Siri, Microsoft's Cortana, and Google Assistant, as consumers gravitate toward touchless user interfaces. Most of these assistants now recognize gestures as well as voice, thanks to support from MEMS and sensor suppliers.

However, that does not mean we can write off touch completely. The use of touch on personal computing devices, along with keyboard-based applications, serves us very well and is likely to remain dominant for a long time. Still, there are several instances where voice is more convenient. For the present, although most of us type or swipe, we also try using voice; even while gripping our smartphones, we often tap the microphone icon and use it as a speech-to-text converter.

According to Matt Crowley, the CEO of Vesper, voice works very well for selecting music, TV shows, and movies, because these form a “massive unstructured database of content that is hard to navigate through hierarchical windows.” Crowley adds that simple tasks such as opening a door, asking for a weather forecast, or setting a timer also suit voice commands very well.

Voice also requires devices such as smart earbuds and smart speakers to be in always-listening modes. However, always-listening devices built around MEMS microphones consume power, and therefore must be power-efficient. Vesper and InvenSense, both makers of MEMS microphones, take different approaches to the power-consumption issue.

The always-listening piezoelectric MEMS microphones from Vesper are designed to use the sound energy itself to wake the device from sleep, during which they consume hardly any power. InvenSense, in contrast, integrates the analog-to-digital converters into its MEMS devices, saving the power that external ADCs would otherwise consume.

User interfaces are also progressing beyond voice, with a newer technology reaching the market: gesture. This requires sensing the motion, depth, and position of objects in three-dimensional space. To implement gestures, consumer devices such as augmented reality and virtual reality systems are incorporating MEMS-based ultrasound time-of-flight sensors.

Most existing systems sense motion and position using light, either visible or infrared. That is why they cannot operate well in sunlight, as it tends to overload the optical receivers. They are also not good at detecting optically transparent or dark-colored surfaces such as windows made of glass.

In comparison, ultrasound works under any lighting conditions, is not sensitive to an object's color, and can detect any solid object. It also consumes very little power, as it does not have to compete with much background noise at ultrasonic frequencies.

This technology is very useful where a touch interface demands too much attention. Within a car, for instance, searching for the right place to touch can distract the driver, so it is more sensible to make adjustments with a wave of the hand than to hunt for an elusive button on a screen.


How Wireless Networking is Helping Industrial IoT

Along with the high level of activity around the Industrial Internet of Things (IoT), there is a rising need to connect industrial sensors wirelessly. At the same time, there is a growing realization that the networking needs of industrial devices and applications are distinctly different from those of the consumer world: they require a higher level of reliability and security. Industrial wireless sensor networks (WSN) therefore have specific requirements.

With the easy availability of low-power processors, intelligent wireless networks, low-power sensors, and big data analytics, interest in the industrial IoT is booming. Simply put, the combination of these technologies allows a multitude of sensors to be deployed anywhere, without worrying about existing power or communications infrastructure. Although purpose-built sensors and networks have long existed in industrial settings such as oil refineries and manufacturing lines, the IoT concept has made it practical to instrument almost everything, from machines, pumps, and pipelines to rail cars, with sensors.

So far, such operations technology systems have existed as separate networks, which narrows the available technologies to those best adapted for business-critical industrial IoT applications. Within the harsh environments typical of industrial applications, the method of networking the sensors determines whether they can be deployed safely, securely, and cost-effectively.

Reliability and Security

Rather than cost, which is the most important system attribute in consumer applications, reliability and security top the list for industrial applications. This is easily explained: worker safety, profitability, quality, and the efficiency of a company producing goods all rely on these networks. That makes it essential for industrial wireless sensor networks to be reliable and secure.

Improving the reliability of a network often involves building in redundancy, meaning the network has fallback mechanisms that let the system recover without any loss of data. In a wireless sensor network, redundancy comes in two basic forms: spatial redundancy and channel hopping. Spatial redundancy in wireless systems involves every node communicating with at least two adjacent nodes. This requires a routing scheme in which data reaches the final destination irrespective of which node receives it. The result is a meshed network, which offers higher reliability than a point-to-point network.

Channel hopping works by having pairs of nodes change to a different channel in the RF spectrum on every transmission, which averts temporary issues with any one channel. For instance, the 2.4 GHz band of IEEE 802.15.4 has fifteen spread-spectrum channels available for hopping. That makes channel-hopping systems far more resilient than single-channel systems. Users can select from several wireless mesh networking standards, including Time Slotted Channel Hopping (TSCH), WirelessHART (IEC 62591), and IETF 6TiSCH.
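
As a rough illustration of how channel hopping of this kind is scheduled, the sketch below computes the channel a pair of nodes uses in a given timeslot from the absolute slot number (ASN) and a per-link channel offset, in the style of TSCH. The hopping sequence shown is a made-up example rather than the sequence mandated by any particular standard.

```python
# Minimal sketch of TSCH-style channel hopping (illustrative sequence only).
HOPPING_SEQUENCE = [11, 16, 21, 12, 17, 22, 13, 18, 23, 14, 19, 24, 15, 20, 25]

def channel_for_slot(asn: int, channel_offset: int) -> int:
    """Return the IEEE 802.15.4 channel a node pair uses in timeslot `asn`.

    asn            -- absolute slot number, incremented once per timeslot
    channel_offset -- fixed offset assigned to this link in the schedule
    """
    return HOPPING_SEQUENCE[(asn + channel_offset) % len(HOPPING_SEQUENCE)]

# The same link lands on a different channel every slot, so a temporary
# problem on one channel only affects one transmission attempt.
for asn in range(5):
    print(asn, channel_for_slot(asn, channel_offset=3))
```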

Security of an industrial wireless sensor network involves achieving confidentiality, integrity, and authenticity. Confidentiality essentially means that only the intended recipient, and no one else, can read data transmitted within the network. Integrity means receiving the data without additions, deletions, or modifications, while authenticity means confirmation that the message really came from the source it claims to. Most often, time is part of the authenticity protocol, protecting a message from being recorded and replayed.
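
As a generic illustration of the integrity, authenticity, and anti-replay ideas (not the scheme used by WirelessHART or any other particular standard), the sketch below tags each sensor reading with a timestamp and an HMAC computed over a shared key, and rejects messages whose tag fails to verify or whose timestamp is too old.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"shared-network-key"   # hypothetical pre-shared key
MAX_AGE_S = 5                        # reject anything older than this

def make_message(sensor_id: str, value: float) -> bytes:
    """Serialize a reading with a timestamp and append an HMAC tag."""
    body = json.dumps({"id": sensor_id, "value": value, "ts": time.time()})
    tag = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "tag": tag}).encode()

def verify_message(raw: bytes):
    """Return the payload if the tag verifies and the message is fresh."""
    msg = json.loads(raw)
    expected = hmac.new(SECRET_KEY, msg["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        return None                  # integrity/authenticity failure
    payload = json.loads(msg["body"])
    if time.time() - payload["ts"] > MAX_AGE_S:
        return None                  # stale timestamp: possible replay
    return payload

print(verify_message(make_message("pump-07", 3.14)))
```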


Solving Old Problems with New Regulators

The first floating three-terminal linear regulators, such as the LT317, appeared on the market in 1976, and the architecture has remained almost unchanged since then. Most designers using linear regulators either settled on the floating architecture itself or added an amplifier loop based on feedback from its output. However, little was done to enhance versatility, regulation, and accuracy until Linear Technology introduced the LT3080 in 2007.

Now, Linear Technology has introduced the LT3081, with a new linear regulator architecture that significantly improves performance and allows easy parallel operation. It also features a wide safe operating area, making it eminently suitable for industrial applications. Providing an output current of 1.5 amperes, the LT3081 has an output adjustable down to zero volts. It is reverse protected and offers monitor outputs for output current and temperature. Users can adjust the current limit by connecting an external resistor to the device.

In the earlier architecture, a pair of feedback resistors set the output voltage while attenuating the feedback signal into the amplifier. The regulation accuracy at the output therefore depended on the percentage of the output voltage fed back. Although this maintained the percentage accuracy, the absolute regulation accuracy, in volts, degraded as the output voltage increased. At the same time, the bandwidth of the regulator also changed: as the loop gain decreased with increasing voltage, the bandwidth fell, resulting in a relatively slow transient response and high ripple. Additionally, the earlier ICs had a fixed current limit, so users requiring different output currents had to add external circuitry to set an application-specific current limit accurately.

The LT3080 changed all this with its new regulator architecture. Based on a current source as the reference, it used a voltage follower as its output amplifier. This offered users a number of advantages, including easy paralleling of regulators for higher output current and operation down to zero output voltage. With the output amplifier always operating at unity gain, the absolute regulation and bandwidth remained constant across the range of output voltage.

Rather than being expressed as a percentage of the output, transient response could now be specified in millivolts, independent of the output voltage.

The LT3081, an industrial regulator, has a wide safe operating area (SOA). While users can draw 1.5 A from the device, they can trim the output voltage down to zero volts. The input is reverse protected, which means the regulator will remain safe even if the input voltage is connected in reverse. For adjusting the output current limit, a single external resistor connected to the device is enough.

Arranged as current sources, the temperature and current monitor outputs operate from 400 mV above the output voltage to 40 V below it. Users read the monitors by connecting a resistor from each of these pins to ground and measuring the voltage across it. Because they are current sources, the monitors keep operating even when the output is short-circuited.

Additionally, a single resistor is enough to set the voltage at the output terminal.
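
As a rough, hedged illustration of how a current-reference architecture programs its output, the sketch below computes the SET resistor for a few target output voltages using V_OUT = I_SET × R_SET. The 50 µA SET-pin current is an assumption to be verified against the LT3081 datasheet, not a figure quoted from this article.

```python
# Illustrative only: confirm the SET-pin current in the LT3081 datasheet
# before using these numbers in a real design.
I_SET = 50e-6  # assumed nominal SET-pin current source, in amperes

def set_resistor(v_out: float) -> float:
    """SET resistor (ohms) for a target output, from V_OUT = I_SET * R_SET."""
    return v_out / I_SET

for v in (1.2, 3.3, 5.0):
    print(f"V_OUT = {v:0.1f} V  ->  R_SET = {set_resistor(v) / 1000:.0f} kΩ")
```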


Current Loop with the Raspberry Pi

Earlier, when pneumatic control systems ruled the industrial world, a host of controllers, sensors, and actuators ran on compressed air, among them ratio controllers, PID controllers, temperature sensors, and actuators. Now electric and electronic controls are the norm, and 4-20 mA signaling is far more common. Wires are easier to install than compressed-air pipes, and electronic systems are simpler to maintain than 20-40 horsepower compressors.

Being a very robust sensor-signaling standard, the 4-20 mA current loop is ideal for data transmission, especially in the industrial arena, with its many sources of noise and interference. The signaling current flows through every component in the loop, even if the wire terminations are not perfect, and each component drops some voltage as the current flows through it. Following Kirchhoff's voltage law, the signaling current remains unaffected as long as the power supply voltage stays greater than the sum of the voltage drops around the loop at the maximum signaling current of 20 mA.

A simple 4-20 mA current loop needs four components: a DC power supply that provides the voltage to drive current through the loop, a 2-wire transmitter that converts the measured physical quantity to a current, a receiver resistor that converts the current signal back into a voltage, and a pair of wires connecting them all into a loop.

The power supply generates a current flow through the wire to the transmitter, which regulates the loop current from a minimum of 4 mA to a maximum of 20 mA, and any value in between, depending on the measured parameter. After flowing through the receiver resistance, the current returns to the controller through the return wire.

Current flowing through the receiver resistor produces an analog voltage, which any voltmeter can read. For instance, if the resistance has a value of 250 Ω, the current will produce 1 VDC at 4 mA, and 5 VDC at 20 mA.
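
A minimal sketch of this receiver-side arithmetic, assuming the 250 Ω resistor mentioned above and a hypothetical 0-100 % process span:

```python
R_RECEIVER = 250.0  # ohms, receiver resistor in the loop

def loop_voltage(current_ma: float) -> float:
    """Voltage across the receiver resistor for a given loop current in mA."""
    return (current_ma / 1000.0) * R_RECEIVER

def percent_of_span(current_ma: float) -> float:
    """Map a 4-20 mA loop current onto a 0-100 % process span."""
    return (current_ma - 4.0) / (20.0 - 4.0) * 100.0

for i_ma in (4.0, 12.0, 20.0):
    print(f"{i_ma:4.1f} mA -> {loop_voltage(i_ma):.2f} V, {percent_of_span(i_ma):5.1f} %")
```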

The PI-SPI-2A0 is a dual 4-20 mA current loop interface for the Raspberry Pi (RBPi). The interface uses a dual 12-bit DAC, the MCP4922. Each milliampere output by the device is also mirrored as a DC voltage output. However, the output circuit has to be powered by an external 24 VDC supply. Each output has a signal-strength LED, with its brightness serving as a simple indicator of signal strength. One of the outputs is set to 20 mA, and the other to 4 mA.

The reference voltage for the DAC is 3.3 VDC. The MCP4922 has two channels providing the 12-bit conversion. The output circuit is configured to give the full 20 mA output at 3 VDC; the full 20 mA output therefore corresponds to roughly 3723 counts (3.0 V / 3.3 V × 4095).
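
The following is a minimal sketch of driving one MCP4922 channel from a Raspberry Pi with the common spidev Python library. The SPI bus and chip-select numbers, and the exact count-per-milliampere scaling, are assumptions for illustration; the real wiring and calibration should be taken from the board's documentation.

```python
import spidev

FULL_SCALE_COUNTS = 3723  # assumed counts for 20 mA (3.0 V / 3.3 V * 4095)

spi = spidev.SpiDev()
spi.open(0, 0)            # assumed SPI bus 0, chip-select 0; check the board docs
spi.max_speed_hz = 1_000_000

def ma_to_counts(current_ma: float) -> int:
    """Convert a desired loop current (0-20 mA) to a 12-bit DAC count."""
    counts = round(current_ma / 20.0 * FULL_SCALE_COUNTS)
    return max(0, min(4095, counts))

def write_dac(channel: int, counts: int) -> None:
    """Write a 12-bit value to MCP4922 channel 0 (DAC A) or 1 (DAC B)."""
    # Command word: channel select, unbuffered VREF, 1x gain, output active.
    command = ((channel & 1) << 15) | (1 << 13) | (1 << 12) | (counts & 0x0FFF)
    spi.xfer2([command >> 8, command & 0xFF])

write_dac(0, ma_to_counts(20.0))  # first channel at full scale (20 mA)
write_dac(1, ma_to_counts(4.0))   # second channel at the 4 mA live-zero
```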

The current loop is inherently insensitive to noise. Theoretically, the output resistance of the current transmitter should be infinite; in practice it is of the order of 3-4 MΩ. Therefore, even a noise source with an amplitude of about 20 V drives only a few microamperes of extra current through the loop, producing an error of roughly 0.0015 V across a 250 Ω receiver resistor, which is insignificant.


When should you use Rigid-Flex PCB?

With devices growing smaller by the day, engineers are finding it tough to fit everything in the box, and it is also getting more expensive. The rigid-flex printed circuit board (PCB) technology is a promising solution that can help designers meet the constraints of size. However, most design teams tend to avoid using this new technology because they think it will increase the product cost.

Most designs start out with traditional rigid PCBs and cable assemblies. This construction works well for small production volumes, as the labor involved is high. Usually there are connectors on each PCB with wire interconnects, which add to the bill-of-materials cost. Additionally, the larger number of joints in the traditional design raises the likelihood of cold joints, which shortens the service life of the device.

In contrast, the new technology of rigid-flex PCBs eliminates most of these joints, improving the reliability, quality, and longevity of the product. Therefore, apart from cost, other considerations are also making rigid-flex circuits more viable.

Compare the Costs

As rigid-flex is not a viable alternative for all designs, engineers need to work out the break-even point, where the cost of using rigid-flex is about equal to that of retaining the traditional design. This price comparison is usually based on the total quoted cost of fabrication and assembly for each technology. Before requesting quotes, however, designers need to work out many of the details, including the layer stack, the estimated number of vias, and track and space ratios.

With everything else held the same, an experimental manufacturing cost comparison between the two technologies provided interesting results. The BOMs differed only in the extra cables and connectors required by the traditional arrangement, which comprised a set of four-layer boards; the rigid-flex design was a four-layer PCB with two inner flex layers. When the cost of assembly was added to real PCB fabrication quotes, the traditional design was found to be cost-effective up to a break-even point of 75 units, beyond which the rigid-flex design was more economical.
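
A minimal sketch of this kind of break-even calculation, with hypothetical fixed (tooling/NRE) and per-unit costs chosen so the result lands near the 75-unit figure mentioned above:

```python
def break_even_quantity(fixed_rigid_flex: float, fixed_traditional: float,
                        unit_rigid_flex: float, unit_traditional: float) -> float:
    """Quantity at which the total costs of the two approaches are equal.

    Assumes rigid-flex has the higher one-time cost and the lower per-unit
    cost; returns infinity if the per-unit cost never favors rigid-flex.
    """
    unit_savings = unit_traditional - unit_rigid_flex
    if unit_savings <= 0:
        return float("inf")
    return (fixed_rigid_flex - fixed_traditional) / unit_savings

# Hypothetical numbers, for illustration only.
qty = break_even_quantity(fixed_rigid_flex=6000.0, fixed_traditional=1500.0,
                          unit_rigid_flex=95.0, unit_traditional=155.0)
print(f"Break-even at about {qty:.0f} units")
```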

As rigid-flex PCBs do not require any cable assembly, the overall assembly effort is lower. This also reduces test complexity, and together they drive down the cost. The supply-chain risk falls, as there are fewer components in the BOM. Additionally, the design of rigid-flex PCBs makes product maintenance more convenient. Using rigid-flex technology is therefore more cost-effective over the course of the product's life cycle.

Design Time

For any project, the design and development costs also need to be factored into viability, along with the costs of manufacturing, assembly, testing, and logistics. Designing for rigid-flex requires the mechanical team to collaborate with the electronics team on the folding and fitting aspects of the design. Because the flex portions may be folded, twisted, and rolled to follow the mechanical design, designers are forced to think and work in 3-D. This takes considerable time and effort, and requires PCB design software capable of defining and simulating bends and folds in the flex portions.


Infrared Rays to Deliver High Speed Wi-Fi

With the ever-increasing number of electronic devices and their rising demand for data, an adequate wireless network to serve them is essential. Scientists at the Eindhoven University of Technology have come up with a technique that more than meets the Internet connection needs of these devices.

The answer lies in the infrared part of the electromagnetic spectrum. Joanne Oh, a researcher at the university, showed how these long-wavelength rays could be used to build a wireless network with vast capacity. The convenience of the method does not end with its huge capacity: since every device in the network gets its own ray of light, there is no sharing of capacity, and data speed is not compromised.

Inexpensive and Safe Method

The installation process is simple and cheap. An optical fiber supplies the infrared light, which is directed by light antennas mounted at several points on the ceiling. The antennas aim the infrared rays precisely in the required directions. No moving parts are involved in the setup, and no power is required.

Each antenna has two gratings, which send out rays of different wavelengths over a wide range of angles. The wavelengths used are safe for the eye and do not cause retinal damage. The gratings can manage several infrared rays and devices at the same time.

No Overloading

The Wi-Fi is accessible to the mobile device user at almost any place in the building or facility. If the tablet or smartphone moves out of the range covered by an antenna, another antenna in the network takes over. Any number of devices can be added to the network without compromising on the speed of data transfer.

Each device is assigned a specific wavelength by the antenna so that it does not have to share its capacity with another gadget.

The research showed that the new Wi-Fi signal delivers 42.8 gigabits per second over a range of 2.5 meters. This is a vast improvement over current data speeds in the Netherlands, which average around 17.6 megabits per second.

So far, the Eindhoven researchers have used the infrared rays for downloading only. For uploading data, which requires less capacity, radio signals are used.

No Interference with Neighboring Signals

The rays sent out by these antennas have wavelengths of 1500 nanometers and longer, while existing Wi-Fi networks use radio signals of far longer wavelength and much lower frequency. The wide separation means the Eindhoven signals cannot interfere with those from other Wi-Fi sources in the vicinity.

The work on infrared-based Wi-Fi is part of a project called BROWSE, headed by Ton Koonen. Other research teams are working on related areas, such as the fiber-optic technology that delivers the light and connects all the antennas in the network.

Koonen expects all aspects of the technology to be ready within five years, bringing what may come to be known as an indoor optical wireless network to homes, offices, and other establishments.

Electronics in the GI Tract Draws Power from Wireless Sources

Electronic devices placed within the gastrointestinal tract for detection and cure of medical conditions require power for their operation. Conventional power sources like batteries placed within the tract can be inconvenient and incompatible with the delicate inner lining of the intestines. Moreover, their short lifespan makes them unsuitable for extended use.

Researchers at Brigham and Women's Hospital, MIT, and The Charles Stark Draper Lab have conceived a more effective solution in the form of an ingestible capsule with an antenna that receives power wirelessly from radio signals. The capsule can supply power to a device placed in the gastrointestinal tract for detecting and treating diseases. The researchers have published the details of their study in Scientific Reports.

The health industry is making rapid advances in the development of futuristic detection and therapeutic tools like gastric pacemakers and imagers placed in the GI tract. The MIT researchers believe that the wireless power sources would be particularly effective for powering this category of tools.

Carlo Gio Traverso, MD, one of the authors of the article, has described how wireless signals can function as a safe power source for devices placed in the GI tract, making possible the detection and treatment of certain kinds of ailments that were not feasible earlier. Traverso is a biomedical engineer and a gastroenterologist at Brigham and Women's Hospital.

Certain other devices placed within the human body, such as neural probes and cochlear implants, derive their power from a technique called near-field coupling. This method is not very suitable for ingestible devices, as they are too small and lie too far from the body surface to pick up the power.

Mid-Field Coupling Technique

The researchers turned to a new technique for powering these ingestible devices. Called mid-field coupling, it runs at higher frequencies than near-field coupling and allows deeply implanted devices to draw power wirelessly. Its power-transmission efficiency has been found to be two to three times that of the near-field coupling method.

A crucial aspect the researchers had to contend with was the need for antennas that could function effectively in human tissue. They designed several antennas and positioned them strategically inside various parts of the GI tract of a swine model, including the esophagus, stomach, and colon, keeping one antenna outside the model for transmission. Tests showed that 37.5 µW, 123 µW, and 173 µW of power could be transmitted to the internally placed antennas, levels quite adequate for running the electronic devices inserted for detection and treatment.

The scientists now aim to refine their techniques for improved results. They plan to study the effects of adjusting factors such as animal size and the depth and orientation of the antennas. They are also looking at ways to increase transmission efficiency by studying how power travels through the fields that the signals set up.

The National Institutes of Health has funded the above work. In addition, the scientists were granted a Draper Fellowship.