Hackaday

Fresh hacks every day
Updated 21 min 48 sec ago

Cyborg, Or Leafy Sensor Array?

2 hours 17 min ago

Some plants react quickly enough for our senses to notice, such as a Venus flytrap or Mimosa pudica. Most of the time, though, we need time-lapse photography at a minimum to notice anything, while more exotic sensors can measure things like microscopic pores opening and closing. As with any sensor reading, those measurements can be turned into action through a little trick we call automation. [Harpreet Sareen] and [Pattie Maes] at MIT brought these two ideas together in a way we haven’t seen before: a plant takes the driver’s seat in a project called Elowan. Details are sparse, but the concept is easy enough to grasp.

We are not sure if this qualifies as a full-fledged cyborg or if this is a case of a robot using biological sensors. Maybe it all depends on the angle from which you present this mixture of plant and machine. Perhaps it truly is the symbiotic relationship the project claims it to be. The robot would not receive any instructions without the plant, and the plant would receive sub-optimal light without the robot. What other ways could plants be integrated into robotics to make a bona fide cyborg?

Via IEEE Spectrum.

Intel Announces Faster Processor Patched for Meltdown and Spectre

5 hours 16 min ago

Intel just announced their new Sunny Cove architecture that comes with a lot of new bells and whistles. The Intel processor line-up has been based on the Skylake architecture since 2015, so the new architecture is a breath of fresh air for the world’s largest chip maker. They’ve been in the limelight this year after the hardware vulnerabilities known as Spectre and Meltdown were exposed. The new designs have of course been patched against those weaknesses.

The new architecture (said to be part of the Ice Lake-U CPU) comes with a lot of new promises such as faster cores, five allocation units, and upgrades to the L1 and L2 caches. There is also support for AVX-512, the Advanced Vector Extensions instruction set, which will improve performance for neural networks and other vector arithmetic.

Another significant change is support for 52 bits of physical address space and 57 bits of linear address space. Today’s x64 CPUs can only use bits 0 to 47, for an address space spanning 256 TB. The additional bits mean a bump to a whopping 4 PB of physical memory and 128 PB of virtual address space.
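Those numbers fall straight out of the bit widths. As a quick sanity check (a back-of-the-envelope demo of the arithmetic, nothing Intel-specific):

#include <stdio.h>

int main(void) {
    printf("48 bits: %llu TB\n", (1ULL << 48) >> 40); /* 256 TB */
    printf("52 bits: %llu PB\n", (1ULL << 52) >> 50); /* 4 PB */
    printf("57 bits: %llu PB\n", (1ULL << 57) >> 50); /* 128 PB */
    return 0;
}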

The new offering was demoed on the company’s 10 nm process, which incidentally is the same one used for the previously launched Cannon Lake. The new processors are due in the second half of 2019 and are being heavily marketed as a boon for the cryptography and artificial intelligence industries. The claim is that for AI, memory-to-CPU distance has been reduced for faster access, and that special cryptography-specific instructions have been added.

Electric Drift Trike Needs Water Cooling

6 hours 46 min ago

Electric vehicles of all types are quickly hitting the market as people realize how inexpensive they can be to operate compared to traditional modes of transportation. From cars and trucks to smaller vehicles such as bicycles and even electric boats, there’s a lot to be said for simplicity, ease of use, and efficiency. But sometimes we need a little bit more out of our electric vehicles than the obvious benefits they come with. Enter the electric drift trike, an electric vehicle built solely for the enjoyment of high-torque electric motors.

This tricycle is built with some serious power behind it. [austiwawa] constructed his own 48V 18Ah battery with lithium ion cells and initially put a hub motor on the front wheel of the trike. When commenters complained that he could do better, he scrapped the front hub motor for a 1500W brushless water-cooled DC motor driving the rear wheels. To put that in perspective, electric bikes in Europe are typically capped at 250W and in the US at 750W. With that much power available, this trike can do some serious drifting, and has a top speed of nearly 50 kph. [austiwawa] did blow out a large number of motor controllers, but was finally able to obtain a beefier one which could handle the intense power requirements of this tricycle.

Be sure to check out the video below to see the trike being test driven. The build video is also worth a view for the attention to detail and high quality of this build. If you want to build your own but don’t want to build something this menacing, we have also seen electric bikes that are small enough to ride down hallways in various buildings, but still fast enough to retain an appropriate level of danger.

Soft Rotating Pneumatic Actuators

8 hours 17 min ago

When we think of pneumatic actuators, we typically consider the standard varieties of pneumatic cylinder, capable of linear motion. These can be referred to as “hard” actuators, made of rigid components and capable of great accuracy and force delivery. However, “soft” actuators have their own complementary abilities – such as being able to handle more delicate tasks and being less likely to injure human operators when used in collaborative operations. The Whitesides Research Group at Harvard University has undertaken significant research in this field, and released a paper covering a novel type of soft pneumatic actuator.

The actuator consists of a series of soft, flexible sealed chambers which surround a wooden dowel in the center. By applying vacuum to these various chambers, the dowel in the center can be pulled into up to eight different positions. It’s a unique concept, and one we can imagine could have applications in various material processing scenarios.

The actuator was built by moulding elastomers around 3D printed components, so this is a build that could theoretically be tackled by the DIYer. The paper goes into great detail to quantify the performance of the actuator, and workshops several potential applications. Testing is done on a fluid delivery and stirring system, and a tethered robotic walker was built. The team uses the term cVAMs – cyclical vacuum-actuated machines – to describe the actuator technology.

The world of soft robotics is a hotbed of development, and we look forward to further work in this field. It’s not just Harvard, either – we’ve seen interesting work from Yale and from the Hackaday community too!


How To Stay Grounded When You Have Zero Potential

9 hours 46 min ago

Ground is an interesting topic when it comes to engineering. Either it’s the reference level for a digital circuit (not necessarily at zero volts, either), or it’s the return path for current, or it’s the metal chassis, which shouldn’t be the return path for current or else something’s terribly broken. Erika Earl’s talk at this year’s Hackaday Superconference is all about ground.

The first type of ground to talk about is the ground in your outlets and walls. The AC safety ground is the third pin on your plug that should be attached to the chassis of your washer/dryer on one end, and somehow connected to the neutral wire somewhere near your breaker box. The theory being that if a live conductor touches the chassis of a lamp or appliance, the current will flow along that ground bus instead of through you, saving you from electrocution. It should also trip the circuit breaker.

But really, we’re rarely dealing with mains power around here. When it comes to electronic design, we’re mostly dealing with analog grounds and digital grounds in circuits. Sometimes these are the same, sometimes they’re not, but they’re both usually referenced to 0 volts. Add in some considerations for EMC and ground loops, and you have an astonishing amount of knowledge wrapped up in having zero potential.

If you want to know what ground actually is, this isn’t a talk to miss. Erika has tons of experience chasing down grounds as an audio engineer, and her career highlights include serving as Director of Hardware Engineering at Slate Digital and Senior Technical Engineer at LA’s legendary Village Recording Studios. There’s a lot of experience here, and if you want to know where to find your ground, Erika is the person to ask.

My Oscilloscope Uses Fire

Wed, 12/12/2018 - 23:30

If you want to visualize sound waves, you reach for your oscilloscope, right? That wasn’t an option in 1905, so physicist [Heinrich Rubens] came up with another way involving flames. [Luke Guigliano] and [Will Peterson] built one of these tubes — known as a Rubens’ tube — and will show you how you can, too. You can see a video of their results below. Just in case a flame oscilloscope isn’t enough to attract your interest, they are driving the thing with a theremin for extra nerd points.

The guys show a short flame run and one with tall flames. The results are surprising, especially with the short flames. Of course, the time base is the length of the tube, so that limits your measurements. The tube has many gas jets along its length, and with a sound source, the height of the flames corresponds to the air pressure from the sound inside the tube.

According to their plans, the tube is 2 inches in diameter and six feet long. They used a #42 drill bit to create the gas jet holes an inch apart, although they mention that if they did it again they’d go smaller and space them closer. The working gas is propane, and if you want to exactly duplicate their build, you’ll need to weld. They mention, though, that you could probably build it without welding. Total cost? About $350.

You can extend the idea of a Rubens’ tube to a square — we hate to call it a Rubens’ cube. Or you can shrink it down to a single point. Either way, it is fire, so you want to be careful, but there is a certain appeal to it, too. It always amazes us how resourceful people can be when they have to be. The invention of the Rubens’ tube is an example of that, although there were many other ways people made up for not having oscilloscopes.

Warnings On Steroids – Static Code Analysis Tools

Wed, 12/12/2018 - 22:01

A little while back, we were talking about utilizing compiler warnings as a first step to make our C code less error-prone and increase its general stability and quality. We know now that the C compiler itself can help us here, but we also saw that there’s a limit to it. While it warns us about the most obvious mistakes and suspicious code constructs, it will leave us hanging when things get a bit more complex.

But once again, that doesn’t mean compiler warnings are useless, we simply need to see them for what they are: a first step. So today we are going to take the next step, and have a look at some other common static code analysis tools that can give us more insight about our code.

Voluntarily choosing C as a primary language in this day and age may seem nostalgic or anachronistic, but preach and oxidize all you want: C won’t be going anywhere. So let’s make use of the tools we have available that help us write better code and defy the pitfalls C is infamous for. Besides, the general concept of static code analysis is universal. After all, many times a bug or other issue isn’t necessarily caused by the language, but rather by some general flaw in the code’s logic.

Compiler Warnings Recap

But let’s first take a step back to compiler warnings. Recall the nonnull attribute, which declares that a function’s parameter must not (and therefore is assumed to never) be NULL; we saw that the compiler’s perspective on it is extremely shortsighted:

extern void foo(char *) __attribute__((nonnull));

void bar(void)
{
    char *ptr = NULL;
    foo(NULL); // warning
    foo(ptr);  // no warning here
}

The compiler will warn about the foo(NULL) call, as it is an obvious violation of the nonnull declaration, but it won’t realize that the second call will eventually also pass NULL as a parameter. To be fair though, why should it? Its primary job is to generate a machine-readable executable from our source code.

Now, this example is a rather clear case, and while the compiler may not warn about it, it is still easy to spot. If you have decent code review practices in place, it should be straightforward to detect the mishap. But sometimes it’s just us by ourselves, with no other developer to review our code, and due to tiredness or other reasons, it might simply slip by our eyes. Other times, the potential issue hiding underneath is a lot less obvious, and it might take a whole series of unfortunate events for it to become an actual problem. We’d have to mentally go through every possible execution path to be sure it’s all good.

Either way, it rather sounds like a waste of time to use manual labor for something that practically screams for automation. So let’s have a look at a few common tools made just for that. Note that we’ll be merely scratching the surface here; consider this more a brief overview of what tools are available.

Static Code Analysis Tools

Static code analysis involves inspecting our program just by analyzing its source code, without ever executing it. For example, it won’t consider the actual data that is processed in a set of functions, but instead make sure that data is passed along and handled in a safe and logical way. This is certainly a subject where throwing money at the problem will get you bigger and shinier tools, and while those have their place in the professional world, we’ll focus on the everyday hacker tinkering on their free-time projects, and see what the open source community has to offer.

While the initial example was good for recalling the shortcomings of compiler warnings, demonstrating the full strength of the other tools cannot be done with a simple scenario. The best way is to see for yourself by using them on your own code, on other tools and programs you frequently compile or use, or on some random projects from GitHub and the like.

clang

Yes, let’s start with clang. But before you start to groan and think “drop the compiler warnings already and move on”, there’s more to clang than its compiler infrastructure, such as its own static code analyzer. It supports the same targets clang does, and can be invoked by preceding your usual build command with the scan-build command.

$ scan-build clang -o foo foo.c

The analyzer doesn’t necessarily require clang as compiler, so this will work as well:

$ scan-build gcc -o foo foo.c

Or you can just run make:

$ scan-build make
...
scan-build: n bugs found
scan-build: Run 'scan-view /tmp/scan-build-xyz' to examine bug reports.
$

While you can’t simply pass a list of source files to scan-build, but rather need to perform an actual build, this has the advantage that the compilation and analysis are done at the same time. It makes the analysis part of the build process itself, instead of some tedious extra task you have to remember to run. After all, it’s up to us to actually use and act on what the tools can provide us. The less they interfere with our flow, the less reluctant we might be to eventually use them and see what they have to say.

Speaking of seeing what they have to say, if you take another look at the last output line scan-build displays, you will find a command to display the results of the analysis. Behind the scan-view command is a simple Python script that starts a local web server and opens the report overview page in your browser. You’ll get more or less the same if you just open file:///tmp/scan-build-xyz/index.html in your browser, and in case you despise anything that doesn’t run in a terminal, this works well enough in your common text mode browsers.

When running scan-build, it might for example output that in a specific place NULL might be passed somewhere where it shouldn’t be, but it won’t tell you under which circumstances. The great thing about the browser-based report here is that you can navigate through the code and follow step by step, for each loop and condition branch, how a potential issue might turn into a bug. Keep in mind that the program is never actually run, so you might encounter some false positives that are never a valid or possible scenario in reality. The other way around, each tool has a different focus, so some issues might not even be considered.

Static code analysis is by no means a one-size-fits-all job, so it won’t hurt to use more than a single tool for it. Well, let’s move on to the next one then.

(sp)lint

Probably the best-known tool for static code analysis is lint, which has become somewhat of a synonym for static code analysis itself. In your average Linux distribution, you should find splint as one implementation of it. Unlike clang‘s static analyzer, splint takes the source files and analyzes them without running any compilation.

$ splint foo.c ... Finished checking --- 3 code warnings $

splint is a quite complex tool with plenty of flags to enable and disable checks, and to control its behavior. It also comes with its own source code annotations, defined with a specially formatted comment /*@annotation@*/, that influence what is analyzed and reported. Whether you like this sort of (debatable) noise in your code is of course up to you.
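To give a small taste of what those annotations look like, here is a minimal sketch (the function names are made up for illustration, but /*@null@*/ and /*@notnull@*/ are stock splint annotations):

#include <stdlib.h>

/* splint now knows this function may legitimately return NULL... */
/*@null@*/ char *maybe_alloc(size_t n)
{
    return malloc(n);
}

/* ...and that callers of this one must never pass NULL. */
void use_buffer(/*@notnull@*/ char *buf);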

You should probably be aware, though, that the latest release of splint is from 2007. Of course, that doesn’t mean it’s outdated; plenty of potential issues are timeless and have been around for longer than the last 11 years. Theoretically, you should also be able to use splint for code targeting, say, AVR microcontrollers, but that might have some emphasis on the “theoretical” part. It will generally take a lot of tweaking and digging through the output to get the most out of it. If you are curious and persistent enough, the splint manual is probably a good place to start.

flawfinder

As mentioned before, every tool usually has a different focus area. In the case of flawfinder, that focus is security vulnerabilities, Common Weakness Enumerations (CWE) in particular. While this offers a generally good overview of insecure C functions and practices, it mainly warns whenever a dangerous construction is detected. It doesn’t seem to check whether there is an actual problem in the code, just that there might be in case you end up using it wrong.
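Running it is as simple as pointing it at your sources:

$ flawfinder foo.c

As an illustrative example (not from any particular project), a classic CWE-120 candidate like strcpy() gets flagged whether or not the destination buffer happens to be big enough:

#include <string.h>

void copy_name(char *dst, const char *src)
{
    /* flagged as a potential buffer overflow (CWE-120), even if every
       caller happens to pass a buffer of a safe size */
    strcpy(dst, src);
}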

Nevertheless, there is a reason for the word common in CWE, so even though you made sure everything is okay with your current implementation, it doesn’t hurt to be reminded every once in a while about those common weaknesses, without proactively digging through every man page. And on a side note, the author of flawfinder has also written a book about secure programming and released it under the GNU Free Documentation License, in case you want to read up some more on that topic.

cppcheck

The last tool we’ll be mentioning, despite its somewhat misleading name, is cppcheck, which covers both C++ and C, and focuses on undefined behavior. If you can afford or already possess the MISRA rule texts, you can include them as well. Some of them are also covered out of the box, and of course, it’s still a fully functional code analyzer even without purchasing the rule texts.

cppcheck also lets you write your own rules, reports its findings either as custom formattable text or as XML, and offers integration with most common IDEs. And in case you want to click something every once in a while, or are otherwise somewhat put off by wading through walls of console text, it also comes with a graphical user interface as an alternative to the command line, which will show the reported issues along with the matching source code.
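Invocation follows the same pattern as the other tools; for example (both flags are standard cppcheck options, and it will happily recurse into a whole directory):

$ cppcheck --enable=all foo.c
$ cppcheck --enable=warning,style --xml src/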

Honorable Mention

One more tool that sounds promising and might be worth looking into is frama-c.

Limitations

Clearly, no single tool can analyze and detect every possible flaw, otherwise this list would have been a lot shorter. And just as some tools will miss some issues, they can also overcompensate by enthusiastically reporting what turn out to be false positives. As mentioned before, you need to decide for yourself which warnings you consider valid and which you need to address. This may seem tedious and a waste of time — exactly what the tools were supposed to help you avoid. And maybe it often is, but it will also help you to better understand your own code, and see some of its implications from an angle you may never have considered. And when it does find a rare bug, it’ll pay off.

After some initial fiddling with the tools, you will also notice that some of them require a lot of tweaking to get the most out of them, as was already mentioned for splint. So it’s again up to you to weigh whether investing that time will be worth it in the long run. Unlike compiler warnings, getting rid of each and every warning from code analysis tools might not be the most rewarding process, especially when so many are false positives. Coder’s discretion is advised.

Of course, static code analysis has by design the limitation that actual data and its meaning are neither considered nor checked. An int is an int, and as long as we don’t cause an overflow or perform other operations that violate the language specification or end in undefined territory, we’ll most likely be good to go from a static code analysis point of view. It won’t detect or care if the int’s value must be in a certain range in order to make sense and cause no harm in the rest of the program’s context, for instance. We’d have to actually execute our code to know what’s happening there. So with that being said, next time we will talk about assertions, and why it’s often better to go out with a bang early on.
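A contrived sketch of that limitation: the function below is perfectly clean as far as any analyzer is concerned, yet quietly misbehaves the moment a caller’s idea of a valid percentage differs from ours.

/* Nothing here violates the language, so an analyzer has no way to know
   that 'percent' only makes sense between 0 and 100. */
int brightness_to_pwm(int percent)
{
    return 255 * percent / 100; /* percent = 300 yields 765: nonsense, but valid C */
}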

A Pi Cluster to Hang in Your Stocking with Care

Wed, 12/12/2018 - 19:00

It’s that time of year again. With the holidays fast approaching, friends and family will be hounding you about what trinkets and shiny baubles they can pretend to surprise you with. Unfortunately, there’s no person harder to shop for than the maker or hacker: if we want it, we’ve probably already built the thing. Or at least gotten it out of somebody else’s trash.

But if they absolutely, positively, simply have to buy you something that’s commercially made, then you could do worse than pointing them to this very slick Raspberry Pi cluster backplane from [miniNodes]. With the ability to support up to five of the often overlooked Pi Compute Modules, this little device will let you bring a punchy little ARM cluster online without having to build something from scratch.

The Compute Module is perfectly suited for clustering applications like this due to its much smaller size compared to the full-size Raspberry Pi, but we don’t see it used that often because it needs to be jacked into an appropriate SODIMM connector. This makes it effectively useless for prototyping and quickly thrown together hacks (i.e. everything most people use the Pi for), and really only suitable for finished products and industrial applications. It’s really the line in the sand between playing around with the Pi and putting it to real work.

[miniNodes] calls their handy little device the Carrier Board, and beyond the obvious five SODIMM slots for the Pis to live in, there’s also an integrated gigabit switch with an uplink port to get them all connected to the network. The board powers all of the nodes through a single barrel connector on the side opposite the Ethernet jack, leaving behind the spider’s web of USB cables we usually see with Pi clusters.

The board doesn’t come cheap at $259 USD, plus the five Pi Compute Modules which will set you back another $150. But for the ticket price you’ll have a 20 core ARM cluster with 5 GB of RAM and 20 GB of flash storage in a 200 x 100 millimeter (8 x 4 inch) footprint, with an energy consumption of under 20 watts when running at wide open throttle. This could be an excellent choice for mobile applications, or if you just want to experiment with parallel processing on a desktop-sized device.

Amazon is ready for the coming ARM server revolution, are you? Between products like this and the many DIY ARM clusters we’ve seen over the years, it looks like we’re going to be dragging the plucky architecture kicking and screaming into the world of high performance computing.

[Thanks to Baldpower for the tip.]

Toast Printer Prints Tasty Images And Weather Forecasts

Wed, 12/12/2018 - 16:00

Electrical Engineering degrees usually focus on teaching you useful things, like how to make electronic devices that actually work and that won’t kill you. But that doesn’t mean that you can’t have some fun on the way. Which is what Cornell students [Michael Xiao] and [Katie Bradford] decided to do with T.O.A.S.T: The Original Artistic Solution for Toast. In case the name didn’t give it away, this is a toast printer. The user supplies an image and a bit of bread, and the T.O.A.S.T prints the image onto the toast. Alternatively, the printer can show you the weather by printing a forecast onto your daily bread.

[Xiao] and [Bradford] programmed a Raspberry Pi W to handle most of the heavy lifting, converting the image or the weather forecast into a 10 by 10 matrix, which is then sent to the PIC32. This drives two motors that move a heat gun: to turn a 1 in the matrix into a toasted spot, the motors pause the heat gun over the corresponding spot on the bread. The whole thing is mounted onto a laser-cut frame, with a 3D printed holder for the heat gun. There is, unfortunately, no butter or jam dispenser, but if you were to combine this with the Toast-Bot, you might get the finished product. That might be a postgraduate level build, though.
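The write-up doesn’t include the conversion code itself, but boiling an image down to a 10 by 10 toast map is simple enough to sketch (the image dimensions and threshold here are our own guesses, not the project’s):

#include <stdint.h>

#define IMG_W 100
#define IMG_H 100

/* Average each cell of a grayscale image; dark cells become 1s,
   i.e. spots for the heat gun to pause over. */
void image_to_toast_map(const uint8_t img[IMG_H][IMG_W], uint8_t map[10][10])
{
    const int cw = IMG_W / 10, ch = IMG_H / 10;
    for (int r = 0; r < 10; r++) {
        for (int c = 0; c < 10; c++) {
            long sum = 0;
            for (int y = 0; y < ch; y++)
                for (int x = 0; x < cw; x++)
                    sum += img[r * ch + y][c * cw + x];
            map[r][c] = (sum / (cw * ch)) < 128; /* 1 = toast this spot */
        }
    }
}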


FPGA Hack Becomes An Atari Game Genie

Wed, 12/12/2018 - 13:00

The Game Genie is a classic of the early 90s video game scene. It’s how you would have beaten the Ninja Turtles game, and it’s why the connector in your NES doesn’t work as it should. They never made a Game Genie for the Atari 2600, though, because by the time the Game Genie was released, the Atari was languishing on the bottom shelves of Toys R Us. Now though, we have FPGAs and development tools. We can build our own. That’s exactly what [Andy] did, and his Game Genie for the 2600 works as well as any commercial product you’d find for this beleaguered console.

To understand how to build a Game Genie for an Atari, you first have to understand how a Game Genie works. The hacks for a Game Genie work by replacing a single byte in the ROM of a game. If your lives are stored at memory location 0xDEAD for example, you would just change that byte from 3 (the default) to 255 (because that’s infinite, or something). Combine this with 6-letter and 8-letter codes that denote which byte to change and what to change it to, and you have a Game Genie.

This build began by setting up a DE0 Nano FPGA development board to connect to an Atari 2600 cartridge. Yes, there are voltage level differences, but this can be handled with a few pin assignments. Then, it’s just a matter of writing Verilog to pass all the data from one set of address and data pins to another set of address and data pins. The FPGA becomes a man-in-the-middle attack, if you will.
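In C terms, the man-in-the-middle boils down to intercepting reads at one address (a sketch of the concept only — the real thing is Verilog watching the cartridge bus, and both values below are made up):

#include <stdint.h>

#define CHEAT_ADDR  0x0ACE /* hypothetical ROM location of the byte to patch */
#define CHEAT_VALUE 0x84   /* hypothetical replacement value */

uint8_t cartridge_read(uint16_t addr); /* pass-through to the real cartridge */

uint8_t genie_read(uint16_t addr)
{
    uint8_t data = cartridge_read(addr);
    /* everything flows through untouched, except the one byte we patch */
    return (addr == CHEAT_ADDR) ? CHEAT_VALUE : data;
}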

With the FPGA serving as a pass-through for the connections on the cartridge, it’s a simple matter to hard-code cheats into the device. For the example, [Andy] found the code for a game, figured out where the color of the fireballs was defined as red, and changed the color to blue. It worked, and all was right with the world. The work then continued with a user interface to enter three cheat codes, and finally everything was wrapped up in a 3D printed enclosure. Sure, the Atari Game Genie works with ribbon cables, but it wouldn’t be that much more work to create a similar project with Lock-On technology. You can check out the entire build video below, or get the info over on Element14.

Improving Depth Of Field With Only 5 Phones

Wed, 12/12/2018 - 10:00

The hottest new trend in photography is manipulating Depth of Field, or DOF. It’s how you get those wonderful portraits with the subject in focus and the background ever so artfully blurred out. In years past, it was achieved with intelligent use of lenses and settings on an SLR film camera, but now, it’s all in the software.

The franken-camera rig, consisting of five Pixel 3 smartphones. The cameras are synchronised over WiFi.

For the Pixel 2 smartphone, Google used some clever phase-detection autofocus (PDAF) tricks to compute depth data in images, and used this to decide which parts of an image to blur. Distant areas would be blurred more, while the subject in the foreground would be left sharp.

This was good, but for the Pixel 3, further development was in order. A 3D-printed phone case was developed to hold five phones in one giant brick. The idea was to take five photos of the same scene at the same time, from slightly different perspectives. This was then used to generate depth data which was fed into a neural network. This neural network was trained on how the individual photos relate to the real-world depth of the scene.

With a trained neural network, this could then be used to generate more realistic depth data from photos taken with a single camera. Now, machine learning is being used to help your phone decide which parts of an image to blur to make your beautiful subjects pop out from the background.

Comparison images show significant improvement of the “learned” depth data versus the stereo-PDAF generated depth data alone. It’s yet another shot fired in the smartphone camera arms race, which shows no signs of abating. We just wonder when the Geiger counter mods are going to ship from the factory.

[via AndroidPolice]

MakerBot Moves Away From Makers with New Printer

Wed, 12/12/2018 - 07:01

If you’ve been following the desktop 3D printing market for the last couple years, you’re probably aware of the major players right now. Chinese companies like Creality are dominating the entry level market with machines that are priced low enough to border on impulse buys, Prusa Research is iterating on their i3 design and bringing many exciting new features to the mid-range price point, and Ultimaker remains a solid choice for a high-end workhorse if you’ve got the cash. But one name that is conspicuously absent from a “Who’s Who” of 3D printing manufacturers is MakerBot; despite effectively creating the desktop 3D printing market, today they’ve largely slipped into obscurity.

So when a banner popped up on Thingiverse (MakerBot’s 3D print repository) advertising the imminent announcement of a new printer, there was a general feeling of surprise in the community. It had been assumed for some time that MakerBot was being maintained as a zombie company after being bought by industrial 3D printer manufacturer Stratasys in 2013; essentially using the name as a cheap way to maintain a foothold in the consumer 3D printer market. The idea that they would actually release a new consumer 3D printer in a market that’s already saturated with well-known, agile companies seemed difficult to believe.

But now that MakerBot has officially taken the wraps off a printer model they call Method, it all makes sense. Put simply, this isn’t a printer for us. With Method, MakerBot has officially stepped away from the maker community from which it got its name. While it could be argued that their later model Replicator printers were already edging out of the consumer market based on price alone, the Method makes the transition clear not only with its eye-watering $6,500 USD price tag, but also with its feature set and design.

That said, it’s still an interesting piece of equipment worth taking a closer look at. It borrows concepts from a number of other companies and printers while introducing a few legitimately compelling features of its own. While the Method might not be on any Hackaday reader’s holiday wish list, we can’t help but be intrigued about the machine’s future.

A Method to the Madness

Method relies heavily on concepts and technology inherited from parent company Stratasys, and bears little resemblance to previous MakerBots or even contemporary desktop 3D printers. It represents the merging of the desktop and industrial 3D printing markets which many assumed would follow MakerBot’s acquisition; it just took a lot longer to materialize than anyone expected.

As with previous MakerBot printers, the Method sticks with the dual extruder design that other manufacturers have largely abandoned in favor of a single extruder with material switching capability. Dual extruders are notoriously tricky to calibrate and maintain, but they’re a much faster approach to multi-material printing thanks to the fact that the nozzle doesn’t have to purge itself every time the material changes. As MakerBot is advertising the Method as “The First Performance 3D Printer”, the speed advantage of sticking with dual extruders is clearly worth the additional engineering challenges for them.

Other interesting features include a heated build chamber which uses convection to maintain a consistent temperature throughout the internal volume. We’re used to seeing heated beds on desktop 3D printers which do an acceptable job of keeping the lower layers warm to prevent warping, but they do no good past a few tens of millimeters. With a heated enclosure however, the entire print is maintained at the same temperature for more consistent results. This is something individuals often attempt to DIY for their own desktop 3D printers, and is a perfect example of MakerBot taking something which is usually seen only in high end industrial machines and bringing it a bit closer to the mainstream.

Method also features internal filament storage bays which MakerBot claims are almost completely sealed from the outside environment. Internal sensors even monitor the humidity in the filament to verify optimal performance. Again, this is a feature we’re no stranger to seeing hacked onto existing 3D printers. Like the convection heated chamber, this is technology which has been unavailable on traditional desktop 3D printers; though we did see at least a prototype of a new machine which was promising to bring this feature to the masses at this year’s East Coast RepRap Festival.

Familiar Features

While Method undoubtedly offers some features which to date have been nearly unheard of in desktop 3D printers, MakerBot has also clearly kept a close eye on the competition over the last couple of years. The Method has a number of features which we’ve already seen on consumer-level printers, which some might say call into question the machine’s sky-high price.

The color touch screen mounted in the top of Method reminds us of Printrbot’s attempt to make 3D printing more user friendly with a similar interface on their Simple Pro. Unfortunately the community didn’t embrace the paradigm shift which the Simple Pro represented, ultimately leading to Printrbot closing their doors a few months ago. Be that as it may, it does seem inevitable that 3D printers will get the same touch user interface that everything else seems to have adopted in the 21st century, so we doubt MakerBot will be alone in trying to introduce this on their printers going forward.

Method also features a few enhancements with which owners of the Prusa i3 MK3 will already be well acquainted. A removable magnetic build plate, automatic filament loading, and encoder-based jam detection are all features touted by Method which are also available on Josef Prusa’s latest printer (at about 1/6th the price).

You could even argue that Method’s onboard camera, remote WiFi control, and extensive use of sensors to monitor the current print were inspired by OctoPrint and its vast array of community developed plugins. The OctoPrint community has been pushing the envelope in terms of 3D printer control and monitoring, and has already gotten the attention of a number of other 3D printer manufacturers. Adding MakerBot to the list of companies impressed by what the open source community has managed to do with their 3D printers seems a safe bet.

Industrial Levels of Irony

This time last year there was a glimmer of hope that MakerBot was attempting to make a return to their roots with the announcement of “MakerBot Labs”, and prior to the unveiling of Method some even theorized that the new MakerBot might take the form of a rebranded Chinese printer, allowing them to offer their support and educational curriculum on a vastly cheaper printer. Now that Method is revealed, we can see that nothing could be further from the truth. With this new machine, MakerBot is doubling down on the industrial and manufacturing markets in which their Replicator printers still had some sway, and omitting the consumer market entirely.

In doing so, MakerBot has truly come full circle. When they released their first 3D printers in 2009, it was precisely because nobody was selling that kind of hardware outside of the industrial setting. They set the stage for the desktop 3D printing revolution, and for years other manufacturers were rushing to play catch up with MakerBot’s latest by releasing their own cheaper clones. With the unveiling of Method, MakerBot has shown that not only are they the ones desperate to copy the trend-setting features of their competitors this time around, but that their priority is selling this new machine to the very market they sought to rebel against a decade ago. With that in mind, you’d be hard-pressed to find a less expensive industrial 3D printer. MakerBot definitely priced the Method outside the consumer level, but if your alternative is a $20,000 industrial printer, this might seem like a great deal.

Open Hardware Board For Robust USB Power Monitoring

Wed, 12/12/2018 - 04:00

We’ve all seen the little USB power meters that have become popular since nearly every portable device has adopted some variation of USB for charging. Placed between the power source and the device under test, they allow you to see voltage and current in real time. Perfect for determining how long you’ll be able to run a USB powered device on batteries, or finding out if a USB power supply has enough current to do the business.

[Jonas Persson] liked the idea of these cheap little gadgets, but wanted something a bit more scientific. His design, which he refers to as UPM, is essentially a “smart” version of those ubiquitous USB gadgets. Instead of just showing the data on a little LCD screen, it can now be viewed on the computer and analyzed. His little gadget even allows you to cut power to the device under test, potentially allowing for automated testing of things such as inrush current.

Essentially the UPM works in much the same way as the simple USB meters: one side of the device goes towards the upstream power source, and the device under test plugs into the other side. Between the two devices are a 16-bit ADC and a differential amplifier which measure the voltage and current. There’s a header on the board which connects to the ADC if you wanted to connect the UPM to an external microcontroller or other data logging device.
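The project’s actual scaling constants aren’t spelled out, but converting raw ADC readings into volts and amps is just arithmetic; every numeric value below is an assumed placeholder, not a figure from the UPM design:

/* All constants are illustrative guesses, not values from the UPM. */
#define VREF       2.048f   /* ADC reference voltage */
#define FULL_SCALE 32768.0f /* 16-bit signed ADC */
#define DIVIDER    3.0f     /* VBUS resistor divider ratio */
#define R_SHUNT    0.01f    /* current shunt, ohms */
#define AMP_GAIN   50.0f    /* differential amplifier gain */

float counts_to_volts(int raw) { return raw * VREF / FULL_SCALE * DIVIDER; }
float counts_to_amps(int raw)  { return raw * VREF / FULL_SCALE / (AMP_GAIN * R_SHUNT); }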

But most likely you would be using the internal microcontroller to analyze the output of the ADC over I2C, which [Jonas] very cleverly connected to the upstream port with an integrated USB hub. One side of the hub goes off to the device being tested, and the other to the microcontroller. So the host device will see both the UPM’s integrated microcontroller and the target device at the same time. From there, you can use the ncurses user interface to monitor and control the device in real-time.

While the hardware looks more or less finished, [Jonas] has some more plans for the software side of UPM, including support for remote control and monitoring over TCP/IP as well as robust logging capabilities. This is definitely a very interesting project, and we’re excited to see it develop further.

In the past we’ve seen homebrew USB power meter builds, and even commercial offerings which boasted computer-based logging and analysis, so it was only a matter of time before somebody combined them into one.

Modified F Clamp is Wheely Good

Wed, 12/12/2018 - 02:30

Sometimes, a job is heavy, messy, or unwieldy, and having an extra pair of hands to help out makes the job more than twice as easy. However, help isn’t always easy to find. Faced with this problem, [create] came up with an ingenious solution to help move long and heavy objects without outside assistance.

Simple, and effective.

The build starts with a regular F-clamp – a familiar tool to the home woodworker. The clamp is old and worn, making it the perfect candidate for some experimentation. First off, the handle is given a good sanding to avoid the likelihood of painful splinters. Then, the top bar is drilled and tapped, and some threaded rod fitted to act as an axle. A polyurethane wheel from a children’s scooter is then fitted, and held in place with a dome nut.

The final product is a wheel that can be clamped to just about anything, making it easier to move. [create] demonstrates using the wheelclamp to move a long piece of lumber, but we fully expect to see these on the shelf of Home Depot in 12 months for moving furniture around the house. With a few modifications to avoid marring furniture, these clamps could be a removalist’s dream.

While you’re busy hacking your tools, check out these useful bar clamps, too. Video after the break.


Julius Lilienfeld and the First Transistor

Wed, 12/12/2018 - 01:00

Here’s a fun exercise: take a list of the 20th century’s inventions and innovations in electronics, communications, and computing. Make sure you include everything, especially the stuff we take for granted. Now, cross off everything that can’t trace its roots back to the AT&T Corporation’s research arm, the Bell Laboratories. We’d wager heavily that the list would still contain almost everything that built the electronics age: microwave communications, data networks, cellular telephone, solar cells, Unix, and, of course, the transistor.

But is that last one really true? We all know the story of Bardeen, Brattain, and Shockley, the brilliant team laboring through a blizzard in 1947 to breathe life into a scrap of germanium and wires, finally unleashing the transistor upon the world for Christmas, a gift to usher us into the age of solid state electronics. It’s not so simple, though. The quest for a replacement for the vacuum tube for switching and amplification goes back to the lab of Julius Lilienfeld, the man who conceived the first field-effect transistor in the mid-1920s.

Vacuums and Emissions

Julius Edgar Lilienfeld. Source: Emilio Segre Visual Archives, Physics Today Collection via the Computer History Museum.

You’d expect big things from a physicist whose Ph.D. advisor was none other than Max Planck, and while Julius Lilienfeld isn’t exactly a household name, he had a long and productive career. Born in 1882 in present-day Lviv, now in Ukraine but then part of Austria-Hungary, Lilienfeld trained at the Friedrich Wilhelm University in Berlin. He earned his doctorate in physics in 1905 and took a non-tenure track professorship at Leipzig University. There he concentrated on the physics of electric discharges in a vacuum, which led directly to some of his earliest patents for medical X-ray tubes. He also worked on cryogenic gasses, leading to work with Count Ferdinand von Zeppelin and his famous dirigibles.

Clearly more of an applied physicist than a theoretician – he only achieved a “satisfactory” rating from Planck on his examination of his knowledge of Maxwell-Hertz equations – Lilienfeld was more eager to patent his ideas than publish scholarly papers on them. He traveled to the United States in 1921 to pursue patent claims on his X-ray tubes against General Electric, and to take some temporary lecturing assignments at New York University.

It was during this period that Lilienfeld first recorded his idea about semiconductor switches. How exactly he came to the conclusion that semiconductors could be used to replace vacuum tubes for switching and amplification is lost to history. It may have stemmed from his early interest in vacuum discharge and his study of field emission, the emission of electrons by electrostatic fields, as opposed to the thermionic emission of vacuum tubes. He was also interested in solid-state rectifiers, and patented an idea for one using compressed copper and sulfur powder.

Thin Films

What is clear is that in 1926, he filed for a patent on his “Method and apparatus for controlling electric currents,” which followed up a 1925 Canadian application for the same. The application described a film of “uni-directional conductivity” across two closely spaced metal electrodes. Lilienfeld suggested that copper sulfide would be a suitable compound for the film, and he described a number of methods for depositing it, including vacuum sputtering. He also describes the control of current through the thin film of copper sulfide by “applying thereto an electrostatic force” through a third electrode located between the two others. Julius Lilienfeld had described a thin-film FET perfectly, and went on to explain how such a device could be used to switch and amplify currents.

From Lilienfeld’s patent. Compare this to the modern design of a MOSFET.

Sadly, Lilienfeld does not appear to have ever built a working prototype of his device; indeed, there’s no evidence that he ever even tried. He also never published any of his work other than as patent applications, so the world would remain ignorant of his insights for another two decades, when Bell Labs started working on what would become the transistor. In fact, Bill Shockley, always included in the official history of the birth of the transistor as the leader of the team, was in fact pointedly excluded by AT&T management from the original point-contact transistor patent application specifically because of Lilienfeld’s patents; the AT&T patent attorneys feared Shockley’s field-effect theories would cause confusion with Lilienfeld’s prior art.

While Lilienfeld missed out on credit for the first FET, he had a long and productive career. In 1926 he left academia, taking an R&D position with Ergon Research Laboratories in Massachusetts. There he invented the electrolytic capacitor in 1931. He amassed a respectable collection of patents over his career, 15 German and 60 from the US and Canada. His skills as an inventor earned him enough that his widow, Beatrice Ginsburg Lilienfeld, left a bequest to the American Physical Society to fund an annual prize in her husband’s memory. Sadly, the APS officials had no idea who Julius was, and had to look up his accomplishments before establishing the Julius Edgar Lilienfeld Prize in 1988.

It’s a shame Lilienfeld didn’t try to build one of his devices, but even if he had, it likely would not have worked exactly as described, given the materials he had available at the time. Julius Lilienfeld was clearly ahead of his time, but it’s tantalizing to think what might have been if the transistor had been invented twenty years earlier.

Off Road Vehicle has Six Wheels and Fluid Power

Tue, 12/11/2018 - 23:30

What has six wheels and runs on water? Azaris — a new off-road vehicle prototype from Ferox. Azaris has a rocker suspension modeled after the one on the Mars rover. The problem is, linking four drive wheels on a rocker suspension would be a nightmare. The usual solution? Motors directly in the wheels. But Ferox has a different approach.

The vehicle has a conventional BMW motorcycle engine, but instead of driving a wheel, it drives a pump. The pump moves fluid to the wheels, where something similar to a water wheel around the diameter of each wheel causes rotation. The fluid is mostly water and the pressure is lower than in a conventional hydraulic system. Auto Times has a video made up of stills of the prototype, which you can see below. We haven’t actually seen it in motion, unfortunately.

According to media reports, the pressure runs from 200 to 1,000 PSI, which is a lot lower than in a conventional system. The motorcycle engine provides 100 horsepower and could be replaced by an electric motor if desired. They also quote the motors as being 98 percent efficient, although we think that means the efficiency from the water pump’s output to the wheels, not the thermal efficiency starting with the motor, since an internal combustion engine is doing well to achieve 50% efficiency.

The selling point is that you can drive wheels using a liquid drivetrain that is supremely flexible. This allows you to do things like the rover-style suspension that would ordinarily require heavy motors in the wheels. The liquid motors are about 24 pounds each. Equivalent electric motors could weigh up to 66 pounds. The motorcycle engine doesn’t even max out the amount of power the wheels could deliver, so there’s room to grow.

These are prototypes and not for sale at the moment, unfortunately. They are simply to showcase Ferox technology. We couldn’t help but wonder: has anyone tried a similar scheme for a robot or other hacker drivetrain?

We’ve seen water power charging a cell phone, but that’s hardly the same thing. We also saw some pretty conventional but tiny hydraulics in this model excavator. The idea is the same though. Use hydraulics to move something with a remote motor.


ABS: Three Plastics in One

Tue, 12/11/2018 - 22:01

It would be really hard to go through a typical day in the developed world without running across something made from ABS plastic. It’s literally all over the place, from toothbrush handles to refrigerator interiors to car dashboards to computer keyboards. Many houses are plumbed with pipes extruded from ABS, and it lives in rolls next to millions of 3D-printers, loved and hated by those who use and misuse it. And in the form of LEGO bricks, it lurks on carpets in the dark rooms of children around the world, ready to puncture the bare feet of their parents.

ABS is so ubiquitous that it makes sense to take a look at this material in terms of its chemistry and its properties. As we’ll see, ABS isn’t just a single plastic, but a mixture that takes the best properties of its components to create one of the most versatile plastics in the world.

All for One

Unlike simple plastics such as polylactic acid (PLA), which we discussed earlier, ABS, or acrylonitrile butadiene styrene, is a copolymer. That means that instead of linking together a single type of monomer into long chains, multiple different monomers are linked together. In the case of ABS, it’s three monomers, and they’re all right there in the name — acrylonitrile, butadiene, and styrene.

Styrene acrylonitrile resin pellets. Source: Ashwini Polymers

How these three components got together is an interesting story. Back as far as the 1930s, chemists in the still-infant polymer industry were experimenting with both polyacrylonitrile (PAN) and polystyrene (PS). PAN was an early success in the search for synthetic fibers, resulting in such products as Orlon acrylic. The synthetic fibers, as well as other copolymers of PAN, found their way into everything from clothing to carpets. Polystyrene was also commercialized around the same time, initially as a replacement for zinc in the die-casting of small parts. The clear plastic was hard, but brittle and with a low melting point.

By the 1940s, polymer chemists had figured out that copolymerizing styrene and acrylonitrile monomers would result in a plastic that was still clear like polystyrene, but less brittle and with better thermal properties. The plastic was called styrene-acrylonitrile (SAN) and found use in the food packing industry, and for making a variety of consumer products.

While SAN was better than pure polystyrene for many applications, it still lacked the toughness needed for others. So in the late 1940s, chemists decided to add another polymer to the mix: polybutadiene. Polybutadiene, or synthetic rubber, had been around since 1910 and has many of the properties of natural rubber — flexibility, ductility, wear resistance — leading to its use in everything from motor vehicle tires, to seals and gaskets, to golf balls.

Add a Little Rubber

Synthetic rubber (polybutadiene). Source: Hills Rubber Co.

Butadiene seemed like a natural addition to the polymer soup of SAN, and in 1948, ABS was invented. With the strength of acrylic, the hardness of styrene, and the toughness of butadiene, the copolymer had a lot of new applications waiting for it. This was mainly due to the ability to control its properties by changing the mix of the three components. The plastic can be optimized for the application, and for the manufacturing method — tweak it a little and you’ve got a plastic with great properties for extrusion, allowing such products as ABS pipes and 3D-printer filament. Change the mix and the plastic works better for injection molding, resulting in myriad parts small and large, including, of course, LEGO.

It would be a mistake to think that ABS is simply three plastics melted together, though. ABS manufacture is a complicated process that has changed markedly over the years. Initially, an emulsion process was favored, where butadiene was first batch polymerized in an aqueous solution; acrylonitrile and styrene monomers were then added to the synthetic latex soup and allowed to polymerize. Little particles of ABS would form in the liquid, which could be extracted through centrifugation.

Polymerization of ABS. The three monomers are above; the polymer is below. X, Y, and Z vary depending on the properties desired, but typically leave the finished polymer with 15 to 35% acrylonitrile, 5 to 30% butadiene and 40 to 60% styrene. Source: Compound Interest.

The mass polymerization method is favored now. In it, polybutadiene is produced separately, chopped into tiny fragments, and mixed with the acrylonitrile and styrene monomers. The batch is vigorously mixed during polymerization, which shears the polybutadiene chunks into even smaller sizes. The degree of mixing controls the size of the rubber particles and therefore the toughness of the plastic. It also controls the glossiness of the finished plastic, which is always very glossy when produced by the emulsion method since the liquid latex emulsion results in very small butadiene particles.

New ABS for Old

The market for ABS is huge. About 10.8 million tons of ABS resin were produced in 2016, and demand increases as new consumer products take advantage of the wide range of properties that can be dialed into the resin by changing the recipe slightly. Production of ABS is resource intensive, though. All of the monomers are derived at least in part from petroleum, and take a lot of energy to create. Luckily, ABS is very recyclable, with used plastic being shredded and mixed with virgin ABS to form new pellets of almost the same quality as new plastic.

[Featured images credit: Alan Chia [CC BY-SA 2.0], via Wikimedia Commons]

True Transparent Parts from a Desktop 3D Printer

Tue, 12/11/2018 - 19:00

We’re no strangers to seeing translucent 3D printed parts: if you print in a clear filament with thin enough walls you can sorta see through the resulting parts. It’s not perfect, but if you’re trying to make a lamp shade or decorative object, it’s good enough. You certainly couldn’t print anything practical like viewing windows or lenses, leaving “clear” 3D printing as more of a novelty than a practical process.

But after months of refining his process, [Tomer Glick] has finally put together his guide for creating transparent prints on a standard desktop FDM machine. It doesn’t even require any special filament; he says it will work on PLA, ABS, or PETG, though for the purposes of this demonstration he’s using the new Prusament ABS. The process requires some specific print settings and some post processing, but the results he’s achieved are well worth jumping through a few hoops.

According to [Tomer], the secret is in the print settings. Essentially, you want the printer to push the layers together far closer than normal, in combination with a high hotend temperature and 100% infill. The end result (hopefully) is that each layer the printer lays down fuses completely with the preceding one, making a print that is more of a literal solid object than we’re used to seeing with FDM printing. In fact, you could argue these settings generate internal structures that are nearly the polar opposite of what you’d see on a normal print.
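In slicer terms the recipe looks something like the Slic3r-style fragment below. [Tomer]’s exact numbers aren’t published in this summary, so treat every value as a starting-point assumption to tune for your own printer and material:

# All values are tuning assumptions, not [Tomer]'s published profile:
# thin layers squeezed close together, slight over-extrusion so the
# layers fully fuse, a hotter-than-usual hotend, and a fully solid part.
layer_height = 0.1
extrusion_multiplier = 1.1
temperature = 255
fill_density = 100%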

The downside with these unusual print settings is that the outside of the print is exceptionally rough and ugly (as you might expect when forcing as much plastic together as possible). To expose the clear internals, you’ll need to knock the outsides down with some fairly intense sanding. [Tomer] says he starts with 600 and works his way up to 4000, and even mentions that when you get up to the real high grits you might as well use a piece of cardboard to sand the print because that’s about how rough the sandpaper would be anyway.

[Tomer] goes on to demonstrate a printed laser lens, and even shows how you can recreate the effect of laser-engraved acrylic by intentionally putting voids inside the print in whatever shape you like. It’s a really awesome effect and honestly something we would never have believed came off a standard desktop 3D printer.

In the past we’ve seen specialized filament deliver some fairly translucent parts, but those results still weren’t as good as what [Tomer] is getting with standard filament. We’re very interested in seeing more of this process, and are excited to see what kind of applications hackers can come up with.

Biology Lab on Your Christmas List

Tue, 12/11/2018 - 16:00

We hope you have been good this year because we have a list to start your own biology lab, and not everything will fit into Santa’s bag (of holding). If you need some last minute goodie points, Santa loves open-source and people who share on our tip line. Our friends at [The Thought Emporium] have compiled a list of the necessary equipment for a biology lab. Chemistry labs-in-a-box have been the inspiration for many young chemists, but there are remarkable differences between a chemistry lab and a biology lab, which are explained in the YouTube video linked above and embedded after the break.

If you are preparing to start a laboratory or wondering what to add to your fledgling lab, this video is perfect. It comes from the perspective of a hacker not afraid to make tools like his heat block and incubator, which should absolutely be built rather than purchased, while certain things, like a centrifuge, should be purchased once the lab is mature. In the middle we have the autoclave, where a used pressure cooker may do the trick, or you may need a full-blown commercial model with lots of space and a high pressure range.

Maybe this will take some of the mystique out of starting your own lab and help you understand what is happening with a gel dock or why a spectrophotometer is the bee’s knees. There are a handful of other tools not mentioned here so if this is resonating, it will be worth a watch.

Interfacing Philips Hue Lights With Everything

Tue, 12/11/2018 - 13:00

The Internet of Things is eating the world alive, and we can’t buy incandescent light bulbs anymore. This means the Internet is now in light bulbs, and with that comes some special powers. You can turn lights on and off from a botnet. You can change the colors. This is the idea behind the Philips Hue system, which is well respected among people who like putting their lights on the Internet. There are other brands — and you can make your own — but the Hue system does work pretty well.

This is what led [Marius] to create the software to interface various electronics with the Hue system. It’s a project called diyHue, and already there’s a vibrant community of devs creating their own smart lights and connecting them to the Internet.

The software for this project is built in Python, and is designed to run on certain single board computers. This allows the SBC to connect to a Hue bridge to control Hue bulbs, to a MiLight hub to control MiLight bulbs, or, with the addition of a ZigBee radio, to control all those ZigBee devices directly. Right now the only things that don’t work are Google Home, because it requires a remote API, the Home & Away feature from the Hue app (again, remote API), and the Eneco Toon.
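Under the hood this ecosystem speaks the Hue bridge’s REST API, which is plain JSON over HTTP, and emulating that interface is diyHue’s whole trick. A minimal example of toggling a light from the command line (the bridge IP and API key are placeholders you’d substitute with your own):

$ curl -X PUT -d '{"on":true,"bri":200}' \
    http://<bridge-ip>/api/<api-key>/lights/1/state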

There really are a fantastic number of devices this software works with, and if you’re building out your Internet-connected home lighting solution, this is one piece of software you need to check out. Thanks to [cheesemarathon] for bringing our attention to this. He also liked it so much he’s now contributing to the GitHub. Very cool.