
January 30 2018

Repairs You Can Print: Broken Glue Gun Triggers Replacement

Picture this: you need to buy a simple tool like a glue gun. There’s usually not a whole lot going on in that particular piece of technology, so you base your decision on the power rating and whether it looks like it will last. And it does last, at least for a few years—just long enough to grow attached to it and get upset when it breaks. Sound familiar?

[pixelk] bought a glue gun a few years ago for its power rating and its claims of strength. Lo and behold, the trigger mechanism has proven to be weak around the screws. The part that pushes the glue stick into the hot end snapped in two.

It didn’t take much to create a replacement. [pixelk] got most of the measurements with calipers and then got to work in OpenSCAD. After printing a few iterations, it fit well enough, but [pixelk] saw a chance to improve on the original design and added a few teeth where the part touches the glue stick. The new part has been going strong for three months.

We think this entry into our Repairs You Can Print contest is a perfect example of the everyday utility of 3D printers. Small reproducible plastic parts are all around us, just waiting to fail. The ability to not only replace them but to improve on them is one of the brightest sides of our increasingly disposable culture.

Still haven’t found a glue gun you can stick to? Try building your own.

Making the Case for Open Source Medical Devices

Engineering for medical, automotive, and aerospace is highly regulated. It’s not difficult to see why: lives are often at stake when devices in these fields fail. The cost of certifying and working within established regulations is not insignificant and this is likely the main reason we don’t see a lot of work on Open Hardware in these areas.

Ashwin K. Whitchurch wants to change this and see the introduction of simple but important Open Source medical devices for those who will benefit the most from them. His talk at the Hackaday Superconference explores the possible benefits of Open Medical devices and the challenges that need to be solved for success.

Ashwin discusses a sobering statistic from the World Health Organization to start off his presentation: about 90% of the world’s investment in medical research benefits only the most affluent 10% of its population. Called the 10/90 gap, this statistic is debated by some, but we think all can agree that applying science and technology to help the sick — no matter their position in life — is a virtue. How can we focus our Open Hardware movement to make advances in medical care available for more people?

We’re delighted that a few of Ashwin’s products which try to address this need were entries in the 2017 Hackaday Prize. His HealthyPi V3 claimed 2nd place and is a patient monitor that records ECG, respiration, pulse-ox, skin temp, and blood pressure. It’s a “hat” for a Raspberry Pi and can be run with or without a screen for the readout. His HeartyPatch project was a Best Product finalist. Based on an ESP8266, it is a wearable single-lead ECG monitor.

These two are interesting products to compare to devices you would find in hospitals in high-income countries. FDA-approved patient monitors cost between $2,000 and $10,000. There are unbranded machines available on markets like AliExpress which cost between $200 and $1,000, but these do not come with certifications and they’re not open source — when they need to be calibrated or repaired, what are your options? An ideal Open Source solution would be independently certifiable and could be calibrated by the caregiving institution, since proper documentation on doing so would exist. And there is another cost benefit: they can utilize generic consumables, items that can be very expensive if locked into one manufacturer’s brand.

HealthyPI V3 “hat” board shown in front of the version that includes a display.

Ashwin mentions that his devices are using the same ICs that are often found in the certified gear. For the patient monitor that’s the AFE4400 for heart rate and pulse oximetry and the ADS1292R multichannel ADC for respiration and ECG. With these silicon solutions available to Open Hardware developers, the concerns for safety and responsible engineering become a matter of established design and verification.

The biggest need for low-cost medical equipment is in places that also have a shortage of specialized medical practitioners. Ashwin envisions low-cost fetal heart monitoring devices for low-income countries where an alarming number of fetal deaths occur during labor. He suggests a device with a user interface simplified for midwives or non-medical birth helpers could indicate something as simple as whether a reading is normal or not; in the abnormal case, having the mother reposition herself could make the difference.

The challenges here are many, and we’ve moved rather hastily through a lot of the topics Ashwin discusses so make sure you set aside some time to watch his talk. He sees a need for a few things to make Open Source medical devices possible. There must be buy-in from the medical and engineering communities. The products need to be made usable by those without advanced medical degrees and safeguards against misdiagnosis from false positives and negatives need to be addressed. Perhaps the biggest hurdle is to reconcile certification and regulation standards with a new breed of devices not meant to replace what we have, but to fill a need currently not addressed.

If these barriers can be overcome, we will see these devices which are currently developer-grade become consumer-grade and lead to a better quality of care for a large part of the world’s population.

Jill Tarter: Searching for E.T.

What must it be like to devote your life to answering a single simple but monumental question: Are we alone? Astronomer Jill Tarter would know better than most what it’s like, and knows that the answer will remain firmly stuck on “Yes” until she and others in the Search for Extraterrestrial Intelligence (SETI) project prove it otherwise. But the path she chose to get there was as unconventional as it was difficult, and holds lessons in the power of keeping your head down and plowing ahead, no matter what.

Endless Hurdles

To get to the point where she could begin to answer the fundamental question of the uniqueness of life, Jill had to pass a gauntlet of obstacles that by now are familiar features of the biography of many women in science and engineering. Born in 1944, Jill Cornell grew up in that postwar period of hope and optimism in the USA where anything seemed possible as long as one stayed within established boundaries. Girls were expected to do girl things, and boys did boy things. Thus, Jill, an only child whose father did traditional boy things like hunting and fixing things with her, found it completely natural to sign up for shop class when she reached high school age. She was surprised and disappointed to be turned down and told to enroll in “Home Economics” class like the other girls.

Doing “boy things” with Dad. Source: SETI Institute

She eventually made it to shop class, but faced similar obstacles when she wanted to take physics and calculus classes. Her guidance counselor couldn’t figure out why a girl would need to take such classes, but Jill persisted and excelled enough to get accepted to Cornell, the university founded by her distant relation, Ezra Cornell. Jill applied for a scholarship available to Cornell family members; she was turned down because it was intended for male relatives only.

Undeterred, Jill applied for and won a scholarship from Procter & Gamble for engineering, and entered the engineering program as the only woman in a class of 300. Jill used her unique position to her advantage; knowing that she couldn’t blend into the crowd like her male colleagues, she made sure her professors always knew who she was. Even still, Jill faced problems. Cornell was very protective of their students in those days, or at least the women; they were locked in their dorms at 10:00 each night. This stifled her ability to work on projects with the male students and caused teamwork problems later in her career.

No Skill is Obsolete

Despite these obstacles, Jill, by then married to physics student Bruce Tarter, finished her degree. But engineering had begun to bore her, so she changed fields to astrophysics for her post-graduate work and moved across the country to Berkeley. The early 70s were hugely inspirational times for anyone with an eye to the heavens, with the successes of the US space program and leaps in the technology available for studying the universe. In this environment, Jill figured she’d be a natural for the astronaut corps, but was denied due to her recent divorce.

Disappointed, Jill was about to start a research job at NASA when X-ray astronomer Stu Boyer asked her to join a ragtag team assembled to search for signs of intelligent life in the universe. Lacking a budget, Boyer had scrounged an obsolete PDP-8 from Berkeley and knew that Jill was the only person who still knew how to program the machine. Jill’s natural tendency to fix and build things began to pay dividends, and she would work on nothing but SETI for the rest of her career.

From the Bureaucratic Ashes

At Arecibo. Source: KQED Science

SETI efforts have been generally poorly funded over the years. Early projects were looked at derisively by some scientists as science fiction nonsense, and bureaucrats holding the purse strings rarely passed up an opportunity to score points with constituents by ridiculing efforts to talk to “little green men.” Jill was in the thick of the battles for funding, and SETI managed to survive. In 1984, Jill was one of the founding members of the SETI Institute, a private corporation created to continue SETI research for NASA as economically as possible.

The SETI Institute kept searching the skies for the next decade, developing bigger and better technology to analyze data from thousands of frequencies at a time from radio telescopes around the world. But in 1993, the bureaucrats finally landed the fatal blow and removed SETI funding from NASA’s budget, saving taxpayers a paltry $10 million. Jill and the other scientists kept going, and within a year, the SETI Institute had raised millions in private funds, mostly from Silicon Valley entrepreneurs, to continue their work.

Part of the Allen Telescope Array. Source: SETI Institute

The Institute’s Project Phoenix, of which Jill was Director until 1999, kept searching for signs of life out there until 2004, with no results. They proposed an ambitious project to improve the odds — an array of 350 radio telescopes dedicated to SETI work. Dubbed the Allen Telescope Array after its primary patron, Microsoft co-founder Paul Allen, the array has sadly never been completed. But the first 42 of the 6-meter dishes have been built, and the ATA continues to run SETI experiments every day.

Jill Tarter retired as Director of SETI Research for the Institute in 2012, but remains active in the SETI field. Her primary focus now is fundraising, leveraging not only her years of contacts in the SETI community but also some of the star power she earned when it became known that she was the inspiration for the Ellie Arroway character in Carl Sagan’s novel Contact, played by Jodie Foster in the subsequent Hollywood film.

Without a reasonable SETI program, the answer to “Are we alone?” will probably never be known. But if it is answered, it’ll be thanks in no small part to Jill Tarter and her stubborn refusal to stay within the bounds that were set for her.

Roll Your Own Magnetic Encoder Disks

[Erich] is in the middle of building a new competition sumo bot for 2018. He’s trying to make this one as open and low-cost as humanly possible. So far it’s going pretty well, and the quest to make DIY parts has presented fodder for how-to posts along the way.

One of new bot’s features will be magnetic position encoders for the wheels. In the past, [Erich] has used the encoder disks that Pololu sells without issue. At 69¢ each, they don’t exactly break the bank, either. But shipping outside the US is prohibitively high, so he decided to try making his own disks with a 3D printer and the smallest neodymium magnets on Earth.

The pre-fab encoder disks don’t have individual magnets—they’re just a puck of magnetic slurry that gets its polarity on the assembly line. [Erich] reverse-engineered a disk and found the polarity using magnets (natch). Then he got to work designing a replacement with cavities to hold six 1mm x 1mm x 1mm neodymium magnets and printed it out. After that, he just had to glue them in place, matching the polarity of the original disk. We love the ingenuity of this project, especially the pair of tweezers he printed to pick and place the magnets.

Rotary encoders are pretty common in robotics applications to detect and measure wheel movement. Don’t quite recall how they work? We’ll help you get those wheels turning.

via Dangerous Prototypes

Making the Case for Slackware in 2018

If you started using GNU/Linux in the last 10 years or so, there’s a very good chance your first distribution was Ubuntu. But despite what you may have heard on some of the elitist Linux message boards and communities out there, there’s nothing wrong with that. The most important thing is simply that you’re using Free and Open Source Software (FOSS). The how and why is less critical, and in the end really boils down to personal preference. If you would rather take the “easy” route, who is anyone else to judge?

Having said that, such options have not always been available. When I first started using Linux full time, the big news was that the kernel was about to get support for USB Mass Storage devices. I don’t mean like a particular Mass Storage device either, I mean the actual concept of it. Before that point, USB on Linux was mainly just used for mice and keyboards. So while I might not be able to claim the same Linux Greybeard status as the folks who installed via floppies on an i386, it’s safe to say I missed the era of “easy” Linux by a wide margin.

But I don’t envy those who made the switch under slightly rosier circumstances. Quite the opposite. I believe my understanding of the core Unix/Linux philosophy is much stronger because I had to “tough it” through the early days. When pursuits such as mastering your init system and compiling a vanilla kernel from source weren’t considered nerdy extravagance but necessary aspects of running a reliable system.

So what should you do if you’re looking for the “classic” Linux experience? Where automatic configuration is a dirty word, and every aspect of your system can be manipulated with nothing more exotic than a text editor? It just so happens there is a distribution of Linux that has largely gone unchanged for the last couple of decades: Slackware. Let’s take a look at its origins, and what I think is a very bright future.

A Deliberate Time Capsule

It’s not as if it’s an accident that Slackware is the most “old school” of all Linux distributions. For one, it’s literally the oldest actively maintained distribution at 24 years. But more to the point, Slackware creator and lead developer Patrick Volkerding simply likes it that way:

The Official Release of Slackware Linux by Patrick Volkerding is an advanced Linux operating system, designed with the twin goals of ease of use and stability as top priorities. Including the latest popular software while retaining a sense of tradition, providing simplicity and ease of use alongside flexibility and power, Slackware brings the best of all worlds to the table.

Slackware.com

For those of you not up on your Linux distribution buzzwords, “stability” and “tradition” can here be taken to mean “old” and “older”. Cutting-edge software and features are generally avoided in Slackware, which is either a blessing or a curse depending on who you ask. A full install of the latest build of Slackware could potentially have software months or years out of date, but it will definitely have software that works.

The upside of all this is that things more or less stay the same in Slackware-land. If you used Slackware 9.0 in 2003, you’ll have no problem installing Slackware 14.2 today and finding your way around.

Benefits of Simplicity

The Slackware installer has remained nearly unchanged since the 1990s.

If you’re looking to learn Linux, there’s great benefit in Slackware’s almost fanatical insistence on simplicity. Rather than learning a distribution-specific method of accomplishing a task (a common occurrence in highly developed distributions like Ubuntu), the “Slackware way” is likely to be applicable to any other Linux distribution you use. For that matter, much of what works in Slackware will also work in BSD and other Unix variants.

This is especially true of the Slackware initialization system, which is closely related to the BSD init style. Services are controlled with simple Bash scripts (rc.wireless, rc.samba, rc.httpd, etc) dropped into /etc/rc.d/. To enable and disable a service you don’t need to remember any distribution-specific command, just add or remove the executable bit from the script with chmod. Adding and removing services is extremely simple in Slackware, making it easy to set up a slimmed-down install for a very specific purpose or for older hardware.
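A minimal sketch of that convention, using a scratch directory in place of /etc/rc.d so it’s safe to try anywhere:

```shell
# Stand-in for /etc/rc.d so this sketch can run on any machine.
mkdir -p /tmp/rc.d.demo
printf '#!/bin/sh\necho "rc.samba: started"\n' > /tmp/rc.d.demo/rc.samba

# "Enable" the service the Slackware way: set the executable bit.
chmod +x /tmp/rc.d.demo/rc.samba

# At boot, the init scripts run every rc.d script that is executable,
# roughly like this:
for script in /tmp/rc.d.demo/rc.*; do
  [ -x "$script" ] && "$script" start
done

# "Disable" the service again: clear the executable bit.
chmod -x /tmp/rc.d.demo/rc.samba
```

On a real Slackware box, the same two chmod commands against /etc/rc.d/rc.samba are the entire enable/disable story.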

Speaking of keeping things simple, the controversial systemd is nowhere to be found. In Slackware, the text file is still King, and any software that obfuscates system configuration and maintenance is likely to have a very tough time getting the nod from Volkerding and his close-knit team of maintainers.

Finally, one of the best features of Slackware is the avoidance of custom or “patched” versions of software. Slackware does not apply patches to any of the software in its package repository, nor the kernel. While other distributions might make slight changes or tweaks to the software they install in an attempt to better brand or integrate it into the OS as a whole, Slackware keeps software exactly as the original developer intended it to be. Not only does this reduce the chances of introducing bugs or compatibility issues, but there’s also something nice about knowing that you’re using the software exactly as the developer intended it.

Frustration Free Packaging

If you’ve heard anything bad about Slackware, it’s almost certainly been about the software packages. Or more specifically, the lack of intelligent dependency management. In other distributions, the package manager understands what software each package relies on to function, and will prompt you to install them as well to make sure everything works as expected. There is no such system in Slackware, but that is also by design.

In an effort to make things as simple as possible, the expectation is that you install everything. Slackware is developed and tested with the assumption that you have a full installation of every package in the repository. In fact, this is the default mode for the Slackware installer; you have to switch into “Expert” mode if you don’t want everything.

If you don’t want a full install and would rather pick and choose packages, you are free to do so, but you’ll need to manually handle dependencies. If you get an error about a missing library when you try to start a program, it’s up to you to find out what it depends on and install it. You’ll quickly develop a feel for just what is and isn’t required in a Linux system by going through and manually solving your own dependencies, which again comes in handy if you are trying to tailor-fit an OS to your specific requirements.
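As a concrete example of that detective work (the gimp/libexif names below are purely hypothetical), ldd is the usual first stop:

```shell
# ldd lists every shared library a dynamically linked binary wants and
# where each one resolves; unresolved entries are flagged "not found".
ldd /bin/sh

# A hypothetical missing-dependency hunt on a trimmed-down install:
#   $ ldd /usr/bin/gimp | grep 'not found'
#           libexif.so.12 => not found
# From there you track down and install whichever package provides the
# library (e.g. with slackpkg, Slackware's stock package tool).
```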

Should You be Slacking?

Today, the argument for Slackware might actually be stronger than it’s been in the past. By not embracing the switch over to systemd, Slackware is seeing more attention than it has in years. It’s still unclear if it can avoid systemd forever, but at least for the foreseeable future, Linux users who aren’t onboard with this controversial shift in the Linux ecosystem have found a safe haven in Slackware.

Slackware was my first Linux distribution, and today I still recommend it to anyone who’s looking to really learn Linux. If you simply want to use Linux, then I have to concede Ubuntu or Mint is probably a better starting point for a Windows-convert. It’s the difference between learning how and why your operating system works, and having the OS leave you alone so you can work on something else. Not everyone needs to learn the former, but it may help you in the future if the latter falls on its face.

Be the Electronic Chameleon

If you want to work with wearables, you have to pay a little more attention to color. It is one thing to have a 3D printer board colored green or purple with lots of different color components onboard. But if it is something people will wear, they are going to be more choosy. [Sdekon] shows us his technique of using Leuco dye to create items that change color electrically. Well, technically, the dye is heat-sensitive, but it is easy to convert electricity to heat. You can see the final result in the video, below.

The electronics here isn’t a big deal — just some nichrome wire. But the textile art processes are well worth a read. Using a piece of pantyhose as a silk screen, he uses ModPodge to mask the screen. Then he weaves nichrome wire with regular yarn to create a heatable fabric. Don’t have a loom for weaving? No problem. Just make one out of cardboard. There’s even a technique called couching, so there’s lots of variety in the textile arts used to create the project.

We get that this is just an example, but we’d love to see a more practical use. Maybe a camera and OpenCV could create smart camouflage, for example.  We had to wonder how big you could make RGB “pixels” and still have some effective use as a crude display.

Adding this to OLED-impregnated fabric could be interesting. If you want to know more about using sewing in projects, we have just the post for you.


Flashing Light Prize 2018: This Time with Neon

The Flashing Light Prize is back this year with a noble twist. And judging from the small set of entries thus far, this is going to be an interesting challenge.

Last year’s Flashing Light Prize was an informal contest with a simple goal: flash an incandescent lamp in the most interesting way possible. This year’s rules are essentially the same as last year, specifying mainly that the bulb itself has to light up — no mechanical shutters — and that it has to flash at 1 Hz with a 50% duty cycle for at least five minutes. But where last year’s contest specified incandescent lamps, this year you’ve got to find a way to flash something with neon in it. It could be an off-the-shelf neon pilot light, a recycled neon sign, or even the beloved Nixie tube. But we suspect that points will be awarded for extreme creativity, so it pays to push the envelope. Last year’s winner used a Wimshurst machine to supply the secondary of an ignition coil and flash a pair of bulbs connected across the primary, so the more Rube Goldberg-esque, the better your chances.

There are only a handful of entries right now, with our favorite being [Ben Krasnow]’s mashup of electricity, mechanics, chemistry, and physics. You’ve got until March 15th to post your flashing neon creation, and there are two categories this year, each with a £200 prize. Get your flash on and win this one for Hackaday.

The Engineering Case for Fusing Your LED Strips

Modern LED strips are magical things. The WS2812 has allowed the quick and easy creation of addressable RGB installations, revolutionizing the science of cool glowy things. However, this accessibility means that it’s easy to get in over your head and make some simple mistakes that could end catastrophically. [Thomas] is here to help, outlining a common mistake made when building with LED strips that is really rather dangerous.

The problem is the combination of hardware typically used to run these LED strings. They’re quite bright and draw significant amounts of power, each pixel drawing up to 60 mA at full-white. In a string of just 10 pixels, the strip is already drawing 600 mA. For this reason, it’s common for people to choose quite hefty power supplies that can readily deliver several amps to run these installations.

It’s here that the problem starts. Typically, wires used to hook up the LED strips are quite thin and the flex strips themselves have a significant resistance, too. This means it’s possible to short circuit an LED strip without actually tripping the overcurrent protection on something like an ATX power supply, which may be fused at well over 10 amps. With the resistance of the wires and strip acting as a current limiter, the strip can overheat to the point of catching fire while the power supply happily continues to pump in the juice. In a home workshop under careful supervision, this may be a manageable risk. In an unattended installation, things could be far worse.

Thankfully, the solution is simple. By installing an appropriately rated fuse for the number of LEDs in the circuit, the installation becomes safer, as the fuse will burn out under a short circuit condition even if the power supply is happy to supply the current. With the example of 10 LEDs drawing 600 mA, a 1 amp fuse would do just fine to protect the circuit in the event of an accidental short.
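The numbers above can be sanity-checked with a bit of shell arithmetic (the 0.6 Ω fault-path resistance is an illustrative guess, not a measured value):

```shell
# Worst-case draw for the strip: 60 mA per WS2812 pixel at full white.
pixels=10
ma_per_pixel=60
echo "worst-case draw: $((pixels * ma_per_pixel)) mA"

# Why a dead short may not trip a beefy PSU: the thin hookup wire and the
# strip's own copper limit the fault current. Assume ~0.6 ohm total at 5 V:
awk 'BEGIN { printf "fault current: %.1f A\n", 5 / 0.6 }'

# ~8.3 A will happily cook the strip without ever tripping a supply fused
# at 10 A, while a 1 A inline fuse (just above the 600 mA worst case) opens.
```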

It’s a great explanation of a common yet dangerous problem, and [Thomas] backs it up by using a thermal camera to illustrate just how hot things can get in mere seconds. Armed with this knowledge, you can now safely play with LEDs instead of fire. But now that you’re feeling confident, why not check out these eyeball-searing 3 watt addressable LEDs?


Looking Back at Microsoft Bob

Every industry has at least one. Automobiles had the Edsel. PC hardware had the IBM PCjr and the Micro Channel bus. In the software world, there’s Bob. If you don’t remember him, Bob was Microsoft’s 1995 answer to why computers were so darn hard to use. [LGR] gives us a nostalgic look back at Bob and concludes that we hardly knew him.

Bob altered your desktop to be a house instead of a desk. He also had helpers including the infamous talking paper clip that suffered slings and arrows inside Microsoft Office long after Bob had been put to rest.

Microsoft had big plans for Bob. There was a magazine and add-on software (apparently there was only one title released). Of course, if you want to install Bob yourself, you’ll need to boot Windows 3.1 — this is 1995, remember.

To log in you had to knock on the big red door and then tell the helpful dog all your personal information. Each user had a private room and all users would share other rooms.

We like to feature retrocomputing and the great old computers of our youth; Bob is kind of the anti-example. It was a major fail. PC World awarded it 7th place in the 25 worst tech products of all time and CNET called it the number one worst product of the decade.

Once you’ve had enough of 1995’s failed software, you can always read up on some more successful Z80 clones. Or you can go even further back in the wayback machine and see what user interfaces were like in the 1960s and 1970s.

New Part Day: I2C In, Charlieplexed LEDs Out

It seems that most of the electrical engineering covered on Hackaday concerns exactly one problem domain: how to blink a bunch of LEDs furiously. There are plenty of LED drivers out there, but one of the more interesting in recent memory came from ISSI in the form of a chip that turns I2C into a Charlieplexed LED array. You may have seen this chip — the IS31FL3731 — in the form of an Adafruit LED matrix and some stupid thing some idiot made, but with it you’re only ever going to get 144 LEDs in an array, not enough if you want real blinky bling.

Now ISSI has released a more capable chip that turns I2C into many more Charlieplexed LEDs. The IS31FL3741 will drive up to 351 LEDs in a 39×9 matrix, or if you’re really clever, an 18×18 single color LED matrix.
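The channel arithmetic behind those two configurations works out like this:

```shell
# IS31FL3741 channel budget (just the arithmetic behind the claim):
echo "39x9 RGB matrix:   $((39 * 9)) channels"    # 351 total
echo "18x18 one color:   $((18 * 18)) LEDs"       # 324, fits within 351
```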

Features of this chip include reverse/short detection for each individual LED, 8-bit PWM, dimming functions, a de-ghosting feature that guarantees an LED is either on or off, a configurable row/column matrix, and a few other handy tools that you would like to see in an LED matrix driver chip. The most impressive chip in this series will be available for under $2/piece in quantities of 2500, although unlike the IS31FL3731, it appears this new chip will only be available in a QFN package.

Speaking from experience, this is a really great chip for driving a whole boatload of LEDs, provided you have a pick and place machine. Yes, you can hand-solder a QFN and several hundred 0402 LEDs, but I wouldn’t recommend it. I really, really wouldn’t recommend it. That said, this is the perfect chip for maximum blinky bling, and the press material from ISSI gives us the great idea of using one of these chips as the backlight controller for RGB LED mechanical keyboards. That’s a great application, and the chip is pretty cheap, too.

You can check out ISSI’s blinky demo video of this chip below.

January 29 2018

Repairs You Can Print: Take a deep breath thanks to a 3D printed fume extractor

If you are a maker, chances are that you will be exposed to unhealthy fumes at some point during your ventures. Whether they involve soldering, treating wood, laser cutting, or 3D printing, it is in your best interest to work in a well-ventilated environment. What seems like sound advice in theory, though, is unfortunately not always a given in practice — in many cases, the workspace simply doesn’t allow for it, especially for hobbyists tinkering in their homes. In other cases, the air circulation is adequate, but the extraction itself could be more efficient by drawing out the fumes right where they occur. The latter was the case for [Zander] when he decided to build his own flexible-hose fume extractor that he intends to use for anything from soldering to chemistry experiments.

Built around not much more than an AC fan, flex duct, and activated carbon, [Zander] designed and 3D printed all the other required parts that turn it into an extractor. Equipped with a pre-filter to hold back bigger particles before they hit the fan, the airflow is guided either through the activated carbon filter or into another flex duct for further venting. You can see more details of his build and how it works in the video after the break.

Workspace safety is often still overlooked by hobbyists, but improved air circulation doesn’t even need to be that complex for starters. There’s also more to read about fumes and other hazardous particles in a maker environment, and how to handle them.

3D Printed Battery Pack Keeps Old Drill Spinning

The greatest enemy of proprietary hardware and components is time. Eventually, that little adapter cable or oddball battery pack isn’t going to be available anymore, and you’re stuck with a device that you can’t use. That’s precisely what happened to [Larry G] when the now antiquated 7.2V NiCd batteries used by his cordless drill became too hard to track down. The drill was still in great shape and worked fine, but he couldn’t power the thing. Rather than toss a working tool, he decided to 3D print his own battery pack.

The 3D modeling on the battery pack is impeccable

He could have just swapped new cells into his old pack, but if you’re going to go through all that trouble, why not improve on things a little? Rather than the NiCd batteries used by the original pack, this new pack is designed around readily available AA NiMH batteries. For the light repairs and craft work he usually gets himself into, he figures these batteries should be fine. Plus he already had them on hand, and as we all know, that’s half the battle when putting a project together.

Interestingly, the original battery pack was wired in such a way that it provided two voltages. In older tools such as this one, this would be used for rudimentary speed control. Depending on which speed setting the drill is on, it would connect to either four or six cells in the original pack. [Larry] didn't want to get involved with the extra wiring and never used the dual speeds anyway, so his pack only offers the maximum speed setting. Though he does mention that it may be possible to do PWM speed control in the battery itself via a 555 timer if he feels like revisiting the project.
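That 555-based PWM idea is only mentioned in passing, but the standard astable timing equations give a feel for what would be involved. A minimal sketch, with component values chosen purely for illustration (nothing here comes from [Larry]'s actual project):

```python
# Standard 555 astable timing math -- an illustrative sketch, not part
# of [Larry]'s build. Component values below are arbitrary examples.
def astable_555(r1_ohms, r2_ohms, c_farads):
    """Return (frequency_hz, duty_cycle) for a classic 555 astable circuit."""
    t_high = 0.693 * (r1_ohms + r2_ohms) * c_farads  # capacitor charging time
    t_low = 0.693 * r2_ohms * c_farads               # capacitor discharging time
    period = t_high + t_low
    return 1.0 / period, t_high / period

# Example: R1 = 1 kOhm, R2 = 10 kOhm, C = 10 nF
freq, duty = astable_555(1_000, 10_000, 10e-9)
print(f"{freq:.0f} Hz at {duty:.1%} duty cycle")
```

Note that a plain astable can't get below 50% duty without a steering diode across R2, which is one reason 555 motor control circuits usually add one.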

[Larry] tells us the pack itself was rendered completely from scratch, using only the original battery pack and trial-and-error to get the fit perfect. He reused the side-mounted release buttons to save time, but otherwise everything is 3D printed in PETG for its strength and chemical resistance.


This is an entry in Hackaday’s

Repairs You Can Print contest

The twenty best projects will receive $100 in Tindie credit, and for the best projects by a Student or Organization, we’ve got two brand-new Prusa i3 MK3 printers. With a printer like that, you’ll be breaking stuff around the house just to have an excuse to make replacement parts.

Smaller and Smarter: The Electron Rocket Takes Flight

On January 21st, 2018 at 1:43 GMT, Rocket Lab’s Electron rocket lifted off from New Zealand’s Mahia Peninsula. Roughly eight minutes later ground control received confirmation that the vehicle had entered a good orbit, followed shortly by the successful deployment of the payload. On only their second attempt, Rocket Lab had become the latest private company to put a payload into orbit. An impressive accomplishment, but even more so when you realize that the Electron is like no other rocket that’s ever flown before.

Not that you could tell from the outside. If anything, the external appearance of the Electron might be called boring. Perhaps even derivative, if you’re feeling less generous. It has the same fin-less blunted cylinder shape of most modern rockets, a wholly sensible (if visually unexciting) design. The vehicle’s nine first stage engines would have been noteworthy 15 years ago, but today only serve to draw comparisons with SpaceX’s wildly successful Falcon 9.

But while the Electron’s outward appearance is about as unassuming as they come, under that jet-black outer skin is some of the most revolutionary rocket technology seen since the V-2 first proved practical liquid fueled rockets were possible. As impressive as it’s been watching SpaceX teach a rocket to fly backwards and land on its tail, their core technology is still largely the same as what took humanity to the Moon in the 1960s.

Vehicles that fundamentally change the established rules of spaceflight are, as you might expect, fairly rare. They often have a tendency to go up in a ball of flames; figuratively if not always literally. Now that the Electron has reached space and delivered its first payload, there’s no longer any question of whether the technology is viable. But whether anyone but Rocket Lab will embrace all the changes introduced with Electron may end up getting decided by the free market.

A Tiny Rocket for a Growing Market

The first thing to understand about Electron is that it’s incredibly small and light for an orbital rocket. To put it into perspective, the Space Shuttle could have carried two fully fueled Electron rockets in its cargo bay without breaking a sweat. Accordingly, the Electron has an extremely low cargo capacity, topping out at around 500 lb. Compared to the Falcon 9’s maximum capacity of roughly 50,000 lb, one might wonder what the point is.

Rocket Lab CEO Peter Beck poses with Electron

The point, of course, is the cost. A launch on Falcon 9 costs the customer around $62 M, while a trip to space on Electron is less than $6 M. If you’ve got a payload light enough to hitch a ride on an Electron, the choice is obvious. As satellites get smaller and lighter, more and more payloads will be able to fit into this category. In fact, Rocket Lab hopes to be launching as many as 100 Electron rockets per year to meet the anticipated demand.

Pound-for-pound, it’s actually much cheaper to fly on Falcon 9. But a lightweight payload on Falcon 9 will be relegated to secondary cargo. The realities of this arrangement were demonstrated in 2012, when one of the Falcon 9’s engines failed on ascent. This only left enough power to accomplish the primary mission, delivering supplies and cargo to the International Space Station. The secondary payload, a satellite from communications provider Orbcomm, had to be left behind. At only 379 lb, Orbcomm’s satellite could have been a perfect fit for a dedicated Electron launch.
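The "pound-for-pound" comparison follows directly from the figures quoted above:

```python
# Back-of-the-envelope cost-per-pound comparison using the approximate
# list prices and capacities quoted in the article.
falcon9_cost, falcon9_capacity = 62_000_000, 50_000   # USD, lb
electron_cost, electron_capacity = 6_000_000, 500     # USD, lb

falcon9_per_lb = falcon9_cost / falcon9_capacity
electron_per_lb = electron_cost / electron_capacity

print(f"Falcon 9:  ${falcon9_per_lb:,.0f}/lb")   # about $1,240/lb
print(f"Electron: ${electron_per_lb:,.0f}/lb")   # about $12,000/lb
```

Roughly a tenfold premium per pound, which only makes sense if a dedicated launch (with your choice of orbit and schedule) is worth that much to you.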

A New(er) Way To Build Rockets

Electron isn’t cheap just because it’s small; the price is also driven down by the state-of-the-art construction techniques used throughout the vehicle. The combustion chamber, injectors, pumps, and valves of each of the Electron’s ten Rutherford engines are 3D printed via electron-beam melting in as little as 24 hours. This is a first in rocketry, and beats NASA and SpaceX to the punch by years. SpaceX won’t be flying their 3D printed engine until their “Dragon 2” capsule flies later this year, and NASA is still in the early stages of their research.

In another first, Rocket Lab has built nearly the entire rocket out of a carbon composite. This gives the rocket its deep black color, but more importantly, a dry weight that Rocket Lab’s CEO Peter Beck says is “less than a Mini Cooper.” Critically, even the fuel and oxidizer tanks are made of carbon composite instead of the traditional aluminum. Electron is the first rocket to successfully fly with carbon composite tanks, but it certainly isn’t the first one to try.

In 2001, NASA famously canceled the Lockheed Martin X-33 spaceplane, a potential replacement for the Space Shuttle, in large part because they determined that its composite propellant tanks were simply beyond the technology of the time.

The Battery Powered Rocket

But the crowning achievement of the Electron isn’t how small it is, or how fast its engines can be 3D printed. Those are impressive feats in their own right, but arguably just extensions of work that’s been going on for years. They were eventualities that Rocket Lab was able to capitalize on, at least in part, because it has such a tiny vehicle.

A simplified liquid fuel rocket engine with preburner. Credit: Duk

The true revolution is the fact that Rocket Lab has completely done away with the complicated preburner and turbine traditionally used in liquid fuel rockets. Rocket engines consume an immense amount of fuel and oxidizer, and powerful pumps are required to get the propellants injected into the combustion chamber at the necessary pressure. To power these pumps, most engines have a turbine which is spun by what’s known as a preburner. In some cases the preburner uses the same fuel as the rocket engine itself; in others it has its own fuel supply with associated plumbing and tanks.

The preburner, turbine, and pumps make up a powerful and complicated system that in some ways is just as difficult to master as the rocket engine itself. Consider that the turbine in each one of the F-1 engines used in the Saturn V developed 55,000 horsepower alone.

In the Rutherford engine, this entire system is replaced with two 50 horsepower brushless motors powered by a bank of lithium polymer batteries. These motors power the pumps directly, and give a level of control over engine operation that would be difficult to match with traditional techniques. With a turbine, spin-up time is directly correlated to throttle response and the engine startup sequence. But by using electrically driven pumps, Electron’s engines are able to respond faster and more accurately to commands from the flight computer.
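For a sense of scale, the shaft power a propellant pump needs is roughly the pressure rise times the volumetric flow rate, divided by pump efficiency. A rough sketch with illustrative numbers (the article doesn't give Rutherford's actual pressures or flow rates, so every figure below is an assumption):

```python
# Rough hydraulic-power estimate for an electrically pumped rocket engine.
# All numbers are illustrative assumptions, not Rocket Lab specifications.
def pump_power_watts(delta_p_pa, flow_m3_s, efficiency):
    """Shaft power needed to raise propellant pressure: P = dP * Q / eta."""
    return delta_p_pa * flow_m3_s / efficiency

# Assume ~10 MPa pressure rise, ~2 L/s propellant flow, 70% pump efficiency
power_w = pump_power_watts(10e6, 2e-3, 0.70)
print(f"~{power_w / 746:.0f} hp")  # 746 W per horsepower
```

With these made-up but plausible small-engine numbers the result lands in the tens of horsepower, consistent with the 50 hp motors mentioned above; scale the flow up to a Falcon 9 class engine and the electric approach quickly stops making sense, which is the point made later in the article.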

The downside is that batteries are heavy, and unlike liquid fuel, don’t get consumed while being used. A dead lithium battery is just as heavy as a fully charged one. To combat this, the Electron actually dumps the dead batteries overboard as the vehicle climbs.

This Changes Everything, Right?

The engineering that Rocket Lab has done on Electron and the fact they made orbit on only their second attempt with such a wildly unconventional vehicle is an incredible achievement. There’s no question the Electron itself will be looked back on as a milestone in the history of rocketry.

But while 3D-printed engines and carbon composite propellant tanks are pretty much a sure bet for future generations of rockets, Electron’s engine technology might be looking at a much shorter life. There’s simply no getting around the fact that liquid fuels have a much greater energy density than batteries. While Rocket Lab has managed to find a workable combination of battery weight versus payload capacity in this specific vehicle, the equation just doesn’t work as you scale up the design. At some point, the weight of the batteries simply becomes too great to remain viable.

If Rocket Lab is right, and there’s a huge market for lightweight payloads, then we may see other small rockets adopt a similar engine. But if the market is content getting to space in the second or third class seats of larger rockets like Falcon 9, this innovative technology may end up taking the back seat itself for economic reasons.


Spiral Laser Cut Buttons Make A Super-Slim USB MIDI Board

We see a huge variety of human-computer interface devices here at Hackaday, and among them are some exceptionally elegant designs. Of those that use key switches though, the vast majority employ off the shelf components made for commercial keyboards or similar. It makes sense to do this, there are some extremely high quality ones to be had.

Sometimes though we are shown designs that go all the way in creating their key switches from the ground up. Such an example comes from [Brandon Rice], and it’s a particularly clever button design because of its use of laser cutting to achieve a super-slim result. He’s made a sandwich of plywood with the key mechanisms formed in a spiral cut on the top layer. He’s a little sketchy on the exact details of the next layer, but underneath appears to be a plywood spacer surrounding a silicone membrane with conductive rubber taken from a commercial keyboard. Beneath that is copper tape on the bottom layer cut to an interweaving finger design for the contacts. An Adafruit Trinket Pro provides the brains and a USB interface, and the whole device makes for an attractive and professional looking peripheral.

You can see the results in action as he’s posted a video, which we’ve included below the break.

We’ve shown you spiral structures for flexibility in the past, with flexible materials made via 3D printing.

Inventing The Microprocessor: The Intel 4004

We recently looked at the origins of the integrated circuit (IC) and the calculator, which was the IC’s first killer app, but a surprise twist is that the calculator played a big part in the invention of the next world-changing marvel, the microprocessor.

There is some dispute as to which company invented the microprocessor, and we’ll talk about that further down. But who invented the first commercially available microprocessor? That honor goes to Intel for the 4004.

Path To The 4004

Busicom calculator motherboard based on 4004 (center) and the calculator (right)

We pick up the tale with Robert Noyce, who had co-invented the IC while at Fairchild Semiconductor. In July 1968 he left Fairchild to co-found Intel for the purpose of manufacturing semiconductor memory chips.

While Intel was still a new startup living off of their initial $3 million in financing, and before they had a semiconductor memory product, they took on custom work to survive, as many startups do. In April 1969, Japanese company Busicom hired them to do LSI (Large-Scale Integration) work for a family of calculators.

Busicom’s design, consisting of twelve interlinked chips, was considered a complicated one. For example, it included shift-register memory, a serial type of memory which complicates the control logic. It also used Binary Coded Decimal (BCD) arithmetic. Marcian Edward Hoff Jr — known as “Ted”, head of Intel’s Application Research Department — felt that the design was even more complicated than a general purpose computer like the PDP-8, which had a fairly simple architecture. He felt they might not be able to meet the cost targets, and so Noyce gave Hoff the go-ahead to look for ways to simplify it.
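BCD stores each decimal digit in its own 4-bit nibble, which maps naturally onto a 4-bit machine but makes arithmetic and control logic fussier than plain binary. A minimal illustration of the encoding (modern Python, obviously not period code):

```python
# Minimal illustration of Binary Coded Decimal: each decimal digit
# occupies its own 4-bit nibble, a natural fit for a 4-bit CPU.
def to_bcd(n):
    """Pack a non-negative integer into BCD, one nibble per decimal digit."""
    bcd = 0
    shift = 0
    for digit in reversed(str(n)):
        bcd |= int(digit) << shift
        shift += 4
    return bcd

print(hex(to_bcd(1971)))  # 0x1971 -- each hex nibble is one decimal digit
```

The convenience is that the packed value reads the same in hex as in decimal; the cost is that adding two BCD numbers needs a decimal-adjust step whenever a nibble overflows past 9, which is exactly the kind of extra logic that complicated Busicom's design.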

Hoff realized that one major simplification would be to replace hard-wired logic with software. He also knew that scanning a shift register would take around 100 microseconds whereas the equivalent with DRAM would take one or two microseconds. In October 1969, Hoff came up with a formal proposal for a 4-bit machine which was agreed to by Busicom.

This became the MCS-4 (Micro Computer System) project. Hoff and Stanley Mazor, also of Intel, and with help from Busicom’s Masatoshi Shima, came up with the architecture for the MCS-4 4-bit chipset which consisted of four chips:

  • 4001: 2048-bit ROM with a 4-bit programmable I/O port
  • 4002: 320-bit DRAM with 4-bit output port
  • 4003: I/O expansion that was a 10-bit static, serial-in, serial-out and parallel-out shift register
  • 4004: 4-bit CPU

Making The 4004 Et Al

In April 1970, Noyce hired Federico Faggin from Fairchild in order to do the chip design. At that time the block diagram and basic specification were done and included the CPU architecture and instruction set. However, the chip’s logic design and layout were supposed to have started in October 1969 and samples for all four chips were due by July 1970. But by April, that work had yet to begin. To make matters worse, the day after Faggin started work at Intel, Shima arrived from Japan to check the non-existent chip design of the 4004. Busicom was understandably upset but Faggin came up with a new schedule which would result in chip samples by December 1970.

Faggin then proceeded to work 80 hour weeks to make up for lost time. Shima stayed on to help as an engineer until Intel could hire one to take his place.

4004 architecture by Appaloosa CC BY-SA 3.0

Keeping to the schedule, the 4001 ROM was ready in October and worked the first time. The 4002 DRAM had a few simple mistakes, and the 4003 I/O chip also worked the first time. The first wafers for the 4004 were ready in December, but when tried, they failed to do anything. It turned out that the masking layer for the buried contacts had been left out of the processing, resulting in around 30% of the gates floating. New wafers in January 1971 passed all the tests Faggin threw at them. A few minor mistakes were later found, and in March 1971 the 4004 was fully functional.

In the meantime, in October 1970, Shima was able to return to Japan where he began work on the firmware for Busicom’s calculator, which was to be loaded into the 4001 ROM chip. By the end of March 1971, Busicom had a fully working engineering prototype for their calculator. The first commercial sale was made at that time to Busicom.

The Software Problem

Now that Intel had a microprocessor, they needed someone to write software. At the time, programmers saw prestige in working with a big computer. It was difficult enticing them to stay and work on a small microprocessor. One solution was to trade hardware, a sim board for example, to colleges in exchange for writing some support software. However, once the media started hyping the microprocessor, the college students came banging on Intel’s door.

To Sell Or Not To Sell

Intel D4004 by Thomas Nguyen CC BY-SA 4.0

Intel’s market was big computer companies and there was concern within Intel that computer companies would see Intel as a competitor instead of a supplier of memory chips. There was also a question about how they would support the product. Some at Intel also wondered whether or not the 4004 could be used for more than just a calculator. But at one point Faggin used the 4004 itself to make a tester for the 4004, proving that there were more uses.

At the same time, cheap $150 handheld calculators were creating difficulties for Busicom’s more expensive $1000 desktop ones. They could no longer pay Intel the agreed contract price. But Busicom had exclusive rights to the MCS-4 chips. And so a fateful deal was made wherein Busicom would pay a lower price and Intel would have exclusive rights. The decision was made to sell it and a public announcement was made in November 1971.

By September 1972 you could buy a 4004 for $60 in quantities of 1 to 24. Overall, around a million were produced. To name just a few applications, it was used in: pinball machines, traffic light controllers, cash registers, bank teller terminals, blood analyzers, and gas station monitors.

Contenders For The Title

Most inventions come about when the circumstances are right. This usually means the inventors weren’t the only ones who thought of it or who were working on it.

AL1 as a microprocessor by Lee Boysel

In October 1968, Lee Boysel and a few others left Fairchild Semiconductor to form Four-Phase Systems for the purpose of making computers. They showed their system at the Fall Joint Computer Conference in November 1970 and had four of them in use by customers by June 1971.

Their microprocessor, the AL1, was 8-bit, had eight registers and an arithmetic logic unit (ALU). However, instead of using it as a standalone microprocessor, they used it along with two other AL1s to make up a single 24-bit CPU. They weren’t using the AL1 as a microprocessor, they weren’t selling it as such, nor did they refer to it as a microprocessor. But as part of a 1990 patent dispute between Texas Instruments and another claimant, Lee Boysel assembled a system with an 8-bit AL1 as the sole microprocessor proving that it could work.

Garrett AiResearch developed the MP944 which was completed in 1970 for use in the F-14 Tomcat fighter jet. It also didn’t quite fit the mold. The MP944 used multiple chips working together to perform as a microprocessor.

On September 17, 1971, Texas Instruments entered the scene by issuing a press release for the TMS1802NC calculator-on-a-chip, with a basic chip design designation of TMS0100. However, this could implement features only for 4-function calculators. They did also file a patent for the microprocessor in August 1971 and were granted US patent 3,757,306 Computing systems cpu in 1973.

Another company that contracted LSI work from Intel was the Computer Terminal Corporation (CTC) in 1970 for $50,000. This was to make a single-chip CPU for their Datapoint 2200 terminal. Intel came up with the 1201. Texas Instruments was hired as a second supplier and made samples of the 1201 but they were buggy.

Intel’s efforts continued but there were delays and as a result, the Datapoint 2200 shipped with discrete TTL logic instead. After a redesign by Intel, the 1201 was delivered to CTC in 1971 but by then CTC had moved on. They instead signed over all intellectual property rights to Intel in lieu of paying the $50,000. You’ve certainly heard of the 1201: it was renamed the 8008 but that’s another story.

Do you think the 4004 is ancient history? Not on Hackaday. After [Frank Buss] bought one on eBay he mounted it on a board and put together a 4001 ROM emulator to make use of it.

[Main image source: Intel C4004 by Thomas Nguyen CC BY-SA 4.0]

Chasing the Electron Beam at 380,000 FPS

Analog TV is dead, but that doesn’t make it any less awesome. [Gavin and Dan], aka The Slow Mo Guys recently posted a video about television screens. Since they have some incredible high-speed cameras at their disposal, we get to see the screens being drawn, both on CRT and more modern LCD televisions.

Now we all know that CRTs draw one pixel at a time, drawing from left to right, top to bottom. You can capture this with a regular still camera at a high shutter speed. The light from a TV screen comes from a phosphor coating painted on the inside of the glass screen. Phosphor glows for some time after it is excited, but how long exactly? [Gavin and Dan’s] high framerate camera let them observe the phosphor staying illuminated for only about 6 lines before it started to fade away. You can see this effect at a relatively mundane 2500 FPS.
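A quick sanity check on what "about 6 lines" of glow means in real time, assuming standard NTSC scan timing (the video doesn't say which TV standard was filmed, so this is an assumption):

```python
# Rough persistence estimate: 6 scanlines of glow, assuming NTSC timing.
# These are standard NTSC figures, not measurements from the video.
lines_per_frame = 525
frames_per_second = 59.94 / 2        # interlaced: two fields per frame

line_time_s = 1 / (frames_per_second * lines_per_frame)
persistence_s = 6 * line_time_s

print(f"{line_time_s * 1e6:.1f} us per line, "
      f"~{persistence_s * 1e6:.0f} us of visible glow")
```

That works out to roughly 64 microseconds per line and a few hundred microseconds of glow, which is why a fast shutter catches a bright band several lines tall rather than a single glowing dot.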

Cranking things up to 380,117 FPS, the highest speed ever recorded by the duo, we see even more amazing results. Even at this speed, quite a few “pixels” are drawn each frame. [Gavin] illustrates that by showing how Super Mario’s mustache is drawn in less than one frame of slow-mo footage. You would have to go several times faster to actually freeze the electron beam. We think it’s amazing that such high-speed analog electronics were invented and perfected decades ago.

Switching from CRT to LCD, the guys show us how the entire screen stays lit, while refresh runs top to bottom. Experimenting on an iPhone 7+ showed that the screen refresh is always from the top of the screen down, toward the home button. If you change the phone to landscape orientation, it will appear to be refreshing from left to right. All pretty interesting stuff, so check out the video. If you’d like to know more about TV technology, read up on the Sony Trinitron story, or learn about the signals used in displaying video.

Thanks for the tip [Quirin]!

The Noisiest Seven-Segment Display Ever

Few mechanical clocks are silent, and many find the sounds they make pleasant. But the stately ticking of an old grandfather clock or the soothing sound of a wind-up alarm clock on the nightstand are nothing compared to the clattering cacophony that awaits [ProtoG] when he finishes the clock that this electromechanical decimal to binary to hex converter and display will be part of.

Undertaken as proof of concept before committing to a full six digit clock build, we’d say [ProtoG] is hitting the mark. Yes, it’s loud, but the sound is glorious. The video below shows the display being put through its paces, and when the clock rate ramps up, the rhythmic pulsations of the relays driving the seven-segment flip displays is hypnotizing. The relays, one per segment of the Alfa Zeta flip displays, have DPDT contacts wired to flip a segment by reversing polarity. As a work in progress, [ProtoG] hasn’t shared many more details yet, but he promises to keep us up to date on the converter aspect of the circuit. Right now it just seems like a simple but noisy driver. We’ll be following this one with interest.
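With one relay per segment, a driver like this ultimately boils down to a digit-to-segment lookup, with each bit deciding the polarity applied to one flip segment. A minimal sketch using the conventional seven-segment hex patterns (these are the standard patterns, not details taken from [ProtoG]'s circuit):

```python
# Sketch of the digit-to-segment lookup such a driver needs: one bit per
# segment (a..g), one relay per bit. Standard seven-segment hex patterns;
# not taken from [ProtoG]'s actual circuit.
SEGMENTS = {  # bit 0 = segment a ... bit 6 = segment g
    0x0: 0b0111111, 0x1: 0b0000110, 0x2: 0b1011011, 0x3: 0b1001111,
    0x4: 0b1100110, 0x5: 0b1101101, 0x6: 0b1111101, 0x7: 0b0000111,
    0x8: 0b1111111, 0x9: 0b1101111, 0xA: 0b1110111, 0xB: 0b1111100,
    0xC: 0b0111001, 0xD: 0b1011110, 0xE: 0b1111001, 0xF: 0b1110001,
}

def relay_states(digit):
    """Per-segment relay polarity: True = drive the segment visible."""
    bits = SEGMENTS[digit]
    return [bool(bits >> seg & 1) for seg in range(7)]

print(relay_states(0xA))  # segments a, b, c, e, f, g on; d off
```

In the real hardware each boolean would select which way a DPDT relay routes the coil polarity, flipping the segment to its visible or hidden face.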

If you prefer your clocks quieter but still like funky displays, check out this mixed media circus-themed clock.

Opt-Out Fitness Data Sharing Leads to Massive Military Locations Leak

People who exercise with fitness trackers have a digital record of their workouts. They do it for a wide range of reasons, from gathering serious medical data to simply satisfying curiosity. When fitness data includes GPS coordinates, it raises personal privacy concerns. But even with individual data removed, such data was still informative enough to spill the beans on secretive facilities around the world.

Strava is a fitness tracking service that gathers data from several different brands of fitness tracker — think Fitbit. It gives athletes a social media experience built around their fitness data: track progress against personal goals and challenge friends to keep each other fit. As expected of companies with personal data, their privacy policy promised to keep personal data secret. In the same privacy policy, they also reserved the right to use the data shared by users in an “aggregated and de-identified” form, a common practice for social media companies. One such use was to plot the GPS data of all their users in a global heatmap. These visualizations use over 6 trillion data points and can be compiled into a fascinating gallery, but there’s a downside.
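To see why aggregation alone doesn't protect groups, consider how such a heatmap is built: drop the identities, bin the GPS fixes into grid cells, and count. A toy sketch with entirely made-up coordinates:

```python
# Toy heatmap aggregation: identities are dropped, but binning GPS fixes
# into grid cells still exposes any spot where many people exercise.
# All coordinates below are fictional.
from collections import Counter

def heatmap(points, cell_deg=0.01):
    """Count GPS fixes per grid cell (a cell is roughly 1 km across)."""
    return Counter(
        (round(lat / cell_deg), round(lon / cell_deg)) for lat, lon in points
    )

# 50 anonymous laps around one (fictional) compound, one stray fix elsewhere
points = [(34.5000, 65.8000)] * 50 + [(34.9000, 66.1000)]
hottest_cell, count = heatmap(points).most_common(1)[0]
print(hottest_cell, count)  # the busy cell stands out with no names attached
```

No individual is identifiable in the output, yet the brightest cell reveals exactly where a group habitually runs, which is the failure mode the heatmap exposed.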

This past weekend, [Nathan Ruser] announced on Twitter that Strava’s heatmap also managed to highlight exercise activity by military/intelligence personnel around the world, including some suspected but unannounced facilities. More worryingly, some of the mapped paths imply patrol and supply routes, knowledge security officers would prefer not to be shared with the entire world.

This is an extraordinary blunder which very succinctly illustrates a folly of the Internet of Things. Strava’s anonymized data sharing obfuscated individuals, but didn’t manage to do the same for groups of individuals… like the fitness-minded active duty military personnel whose workout habits are clearly defined on these heat maps. The biggest contributor (besides wearing a tracking device in the first place) is that the data sharing is enabled by default and must be explicitly opted out of:

“You can opt-out of contributing your anonymized public activity data to Strava Metro and the Heatmap by unchecking the box in this section.” —Strava Blog, July 2017

We’ve seen individual fitness trackers hacked and we’ve seen people tracked through controlled domains before, but the global scope of [Nathan]’s discovery puts it in an entirely different class.

[via Washington Post]

More Than Just An Atari Look-Alike

The Raspberry Pi has been a boon for hackers with a penchant for retro gaming. Redditor [KaptinBadkruk] wanted to get on board the game train and so built himself an Atari 2600-inspired Raspberry Pi 3 console!

A key goal was the option to play Nintendo 64 titles, so [KaptinBadkruk] had to overclock the Pi and then implement a cooling system. A heatsink, some copper pads, and a fan from an old 3D printer — all secured by a 3D printed mount — worked perfectly after giving the heatsink a quick trim. An old speaker and a mono amp from Adafruit — and a few snags later — had the sound set up, with the official RPi touchscreen as a display.

After settling on an Atari 2600-inspired look, [KaptinBadkruk] laboured through a few more obstacles in finishing it off — namely, power. He originally intended for this project to be portable, but power issues meant that idea had to be sidelined until the next version. However, that is arguably offset by [KaptinBadkruk]’s favourite part: a slick 3D printed item box from Mario Kart front and center completes the visual styling in an appropriately old-meets-new way.

That item block isn’t the first time a lightshow has accompanied an Atari console, but don’t let that stop you from sticking one in your pocket.

[Via /r/DIY]

Hackaday Links: January 28, 2018

In case you haven’t heard, we have a 3D printing contest going on right now. It’s the Repairs You Can Print Contest. The idea is simple: show off how you repaired something with a 3D printer. Prizes include $100 in Tindie credit, and as a special prize for students and organizations (think hackerspaces), we’re giving away a few Prusa i3 MK3 printers.

[Drygol] has made a name for himself repairing various ‘home’ computers over the years, and this time he’s back showing off the mods and refurbishments he’s made to a pile of Amiga 500s. This time, he’s installing some new RAM chips, fixing some Guru Meditations by fiddling with the pins on a PLCC, adding a built-in modulator, installing a dual Kickstart ROM, and installing a Gotek floppy adapter. It’s awesome work that puts all the modern conveniences into this classic computer.

Here’s an FPGA IoT Controller. It’s a Cyclone IV and a WiFi module stuffed into something resembling an Arduino Mega. Here’s the question: what is this for? There are two reasons you would use an FPGA, either doing something really fast, or doing something so weird normal microcontrollers just won’t cut it. I don’t know if there is any application of IoT that overlaps with FPGAs. Can you think of something? I can’t.

Tide pods are flammable.

You know what’s cool? Sparklecon. It’s a party filled with a hundred pounds of LEGO, a computer recycling company, a plasmatorium, and a hackerspace, tucked away in an industrial park in Fullerton, California. It’s completely chill, and a party for our type of people — those who like bonfires, hammer Jenga, beer, and disassembling fluorescent lamps for high voltage transformers.

A few shoutouts for Sparklecon. The 23b Hackerspace is, I guess, the main host here, or at least the anchor. Across the alley is NUCC, the National Upcycled Computing Collective. They’re a nonprofit that takes old servers and such, refurbishes them, and connects them to projects like Folding@Home and SETI@Home. This actually performs a service for scientists, because every moron is mining Bitcoin and Ethereum now, vastly reducing the computational capabilities of these distributed computing projects. Thanks, OSH Park, for buying every kind of specialty pizza at Pizza Hut. I would highly encourage everyone to go to Sparklecon next year. This is the fifth year, and it’s getting bigger and better every time.
