There is a reason the HYPER lab is the only academic research lab in the US focused on cryogenic hydrogen: it’s hard.
Recall that hydrogen:
has the widest flammability limits of any gas (4–75% in air and 4–94% in oxygen, by volume).
has a very low ignition energy in air (a grain of sand carried in a jet has enough kinetic energy to ignite it).
has the highest combustion energy of any fuel by mass (119.96 MJ/kg).
has one of the lowest boiling points of any fluid (about -423°F), one of the highest thermal conductivities, and the highest latent heat of vaporization (the energy required to boil the fluid) per mass (~446 kJ/kg) of any cryogen. Combined, this means hydrogen is probably the fluid most capable of giving you cold burns (frostbite).
has the largest liquid-to-vapor volume expansion ratio of any fuel. The room-temperature gas occupies roughly 780 times the volume of the liquid at atmospheric pressure. If confined in a sealed vessel, this expansion can drive the pressure well over 27,000 psi.
has the lowest mass of any atom or molecule, giving hydrogen the largest thermal de Broglie wavelength, or ability to tunnel through things (i.e. leak), of any atom or molecule (~1 nm near 20 K).
has the highest ability to embrittle (weaken or even revert to powder) many materials by chemically reacting with elements in alloys.
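The expansion and confinement figures above can be sanity-checked with a back-of-envelope ideal-gas estimate. This is a sketch only: the liquid density value is a standard handbook number I'm assuming here, the exact expansion ratio depends on the reference conditions chosen, and the ideal gas law badly underpredicts pressure at liquid-like densities, so real-fluid compressibility pushes the confined pressure toward the quoted value.

```python
# Rough ideal-gas sanity check of the hydrogen expansion/confinement figures.
R = 8.314          # J/(mol*K), universal gas constant
M = 2.016e-3       # kg/mol, molar mass of H2
RHO_LIQ = 70.8     # kg/m^3, liquid H2 density near the normal boiling point (assumed)

def gas_density(p_pa, t_k):
    """Ideal-gas density of H2 at pressure p_pa (Pa) and temperature t_k (K)."""
    return p_pa * M / (R * t_k)

def confined_pressure(t_k):
    """Ideal-gas pressure if liquid-density H2 warms to t_k in a sealed, rigid vessel."""
    return RHO_LIQ * R * t_k / M   # Pa

expansion = RHO_LIQ / gas_density(101_325, 293.15)   # liquid vs. gas at 1 atm, 20 C
p_psi = confined_pressure(300.0) / 6894.76           # Pa -> psi

print(f"liquid-to-gas expansion ratio ~ {expansion:.0f}x")
print(f"confined warm-up pressure ~ {p_psi:,.0f} psi (ideal gas only)")
```

The ideal-gas estimate lands near 13,000 psi; at these densities the compressibility factor is roughly 2, which brings the real pressure up toward the 27,000+ psi figure quoted above.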
To make matters even worse, there is little to no information available on cryogenic hydrogen embrittlement of materials. So what are we left to do?
Know what works & know the embrittlement mechanisms
We’ve run cryogenic hydrogen experiments for years, at low pressures, using copper, aluminum, and brass plumbing materials. Hydrogen doesn’t readily react chemically with the elements in those pure metals or alloys. That’s fine until you have to go to higher strengths and pressures; then things get tricky. Two forms of embrittlement must be considered at cryogenic temperatures: cryogenic phase change and hydrogen attack.
One of the better books on the construction of cryostats (cryogenic vessels for experiments) I’ve seen is Jack Ekin’s Experimental Techniques for Low Temperature Measurement. Ekin was a cryogenics researcher at NIST for decades and now maintains the researchmeasurements.com website. Figure 6-20 from his book, available on the website here, shows the low-temperature fracture toughness of various cryostat materials. As you can see, most high-strength steels and titanium alloys go through a glassy phase transition that reduces their toughness to ~25% of room-temperature values. Pretty much any steel designed for strength suffers a huge reduction in toughness below 100 K. A key exception is stainless steel AISI 316, which actually gets significantly tougher at cryogenic temperatures.
The second filter is resilience to hydrogen attack. Sadly, little to no data is available on this at cryogenic temperatures. This is likely because everything is slower and the chemical reaction barriers much higher at cryogenic temperatures, so hydrogen attack mechanisms are mostly damped. However, nearly all cryogenic vessels sit at room temperature at one time or another between tests, exposing them to hydrogen and placing a qualifier on their strength. NASA recently published an excellent report reviewing hydrogen embrittlement in materials. The end result? Nickel content is key. Little to no reduction in yield strength is observed for steels with nickel content above 12.5%. Stainless steel AISI 316 is one of the few steels with that much nickel, though it tends to be expensive.
In the end, if you’re unsure and the design is key — best to call a professional. The HYPER lab is in the process of setting up a fatigue load frame for cryogenic hydrogen testing that will be available for testing cryogenic hydrogen embrittlement of novel materials.
It was a revelation. It was so simple. How could I have waited until graduate school to read this?
I called my dad to see if he knew about it. “Oh ya, your mom and I both attended trainings by him before you were born. Definitely influenced how we raised you.”
If I had one piece of advice for learning to communicate and be successful in your careers it would be to read Dale Carnegie’s classic, “How to Win Friends and Influence People.” Here’s an excerpt about writing a letter:
“Dear Mr. Vermylen,
Your company has been one of our good customers for fourteen years. Naturally, we are very grateful for your patronage and are eager to give you the speedy, efficient service you deserve. However, we regret to say that it isn’t possible for us to do that when your trucks bring us a large shipment in the late afternoon, as they did on November 10. Why? Because many other customers make late afternoon deliveries also. Naturally, that causes congestion. That means your trucks are held up unavoidably at the pier and sometimes even your freight is delayed. That’s bad, but it can be avoided. If you make your deliveries at the pier in the morning when possible, your trucks will be able to keep moving, your freight will get immediate attention, and our workers will get home early at night to enjoy a dinner of the delicious macaroni and noodles that you manufacture. Regardless of when your shipments arrive, we shall always cheerfully do all in our power to serve you promptly. You are busy. Please don’t trouble to answer this note.”
Granted, this is from 1930, but Dale Carnegie’s style has some important takeaways. What is he doing that relates to our three rules for engineering communication: Relevance, Credibility, and Efficiency? Let’s apply this to some example emails I’ve received over the years: Email examples. After reading those examples, which email do you think I replied to? Why?
You’ll find that to be relevant takes research. To be credible takes connections. To be efficient takes practice.
“You can close more business in two months by becoming interested in other people than you can in two years by trying to get people interested in you.” ~Dale Carnegie
I had the wonderful opportunity to present to the WA Senate Transportation subcommittee on hydrogen technologies today. The link below includes a ~20 minute video that is a nice primer on what we do and why:
Where do you look when you walk into a space for the very first time?
The majority will look down at the floor, to make sure they don’t trip and hurt themselves.
The architect who designed the space looked down too, at the plans and scale models.
But do you ever look up? At the place nobody tends to look? Did the designers think about that place? Is it a blind spot?
You might ask yourself several more questions:
Where am I and is this place safe? (Survival)
Who’s here? (Tribal)
Who’s in charge? (Authority)
What are the rules? (Legal)
When and how can you use this place? (Performance)
Where and what is the legacy? (Community)
Why is it like this? (Systemic)
Now walk into a space you know well. See it again for the first time. Go through the list.
That’s the thing about blind spots: none of them are initially blind. We’re conditioned to blindness, conditioned to value certain spaces and things above others, and that conditioning dramatically shapes the outcome. The most sensitive time to influence these values is at the start, at the initial condition.
What are your initial conditions? What do they say about your values? What’s missing?
Over the last four months the HYPER lab has developed a Memorandum of Understanding (MOU) to help establish the initial conditions for new lab members, specifically in the Legal/Authority value dimensions. Here’s the draft: HYPER Lab Member MOU 2019.
That’s the thing about initial conditions: they’re only initial. Although you may initially get them where you want them to be, the physical reality of the environment/problem may dampen things quickly. Then you’ll know where the problem is.
While we’re at it — props to the new maps and Robert B. Stewart Student Accolades Foyer at the main entrance to the VCEA complex. Everything about the space implies that from here you’ll go places.
“In a free society, you get what you celebrate.” – Dean Kamen (a mechanical engineer)
We’ve all been in the room when that one Husky fan blurted out, “I don’t know why anyone would go to the Palouse. It’s a desert.” Or when that new department chair said, “I’ve seen what the region has to offer,” before quickly leaving. For everyone else, still open to discovery, this list is for you.
Before we start the list, let’s get our bearings straight. The Palouse is the local name for a distinct climate region in Southeastern Washington characterized by low grassy hills. The origin of the regional name ‘Palouse’ is debated. The French word pelouse refers to land with short, thick grass; the native tribe of the Palouse also referred to themselves with a similar-sounding name. Regardless, many settlers, including my ancestors, stayed here because of the incredible potential and character of the region. We’ll start at the top and work our way down.
1. Magic Mountains
If you don’t know why the natives considered Kamiak and Steptoe Buttes “Magic Mountains,” then you need to go see them for yourself. It might be that Steptoe is literally a giant mountain of quartz in an ocean of basalt. It could be that Kamiak turns gold in the spring with the balsamroot bloom. Regardless, knowing the history of these Buttes and how they were set aside as parks is part of understanding the Palouse. Read the story of how Virgil T. McCroskey created Mary McCroskey State Park sometime to learn about one of the region’s true heroes.
There is a scene towards the end of “Ready Player One” where the team flies to the mansion in the Wallowa Mountains that flank the southern border of the Palouse in Oregon; they’re called the Eastern Swiss Alps for a reason. For the life of me, I can’t find my photos of the Wallowas. It might be that the view is so ingrained in my memory that I believe I took pictures I never actually took. There was a moment on one drive back when we stopped outside Flora to take in the view. The snow-capped Wallowas to the south, the Blue Mountains to the west, and the Grande Ronde canyon below… It gives a person perspective, like this shot from the Bald Mountain Fire Lookout in the Hoodoo Mountains.
2. Hell’s Canyon
The southern boundary of the Palouse is defined by Hell’s Canyon, the deepest river gorge in North America at 7,993 ft, formed between the Wallowa and Seven Devils Mountains. As you’ll find in this post, settlers to the region were pretty straight-up when naming things. My grandfather grew up herding cattle in Hell’s Canyon before World War 2, after which grazing there was outlawed. He would tell stories of outlaws who’d escape into Hell’s Canyon to hide their loot, of catching gigantic sturgeon with lamb legs on set-lines, of building many of the Fish and Game cabins you can stay in today… I came to know the draws in the canyon by the number of snakes they could support.
While there are many ways to experience Hell’s Canyon, there’s no one right way. You can arrange white-water rafting trips through UREC, or rent a jet boat ride if you’re not feeling adventurous. One thing to know: this country is as big up and down as it is wide. Roads are built where and if they can be, so expect to get off the beaten path to get the real experience.
3. National Forests and Streams
The Palouse is surrounded on three sides by the Colville, Coeur d’Alene, St. Joe, Clearwater, Nez Perce, and Wallowa-Whitman National Forest districts. Together they make up the largest unbroken stretch of National Forest in the lower 48. Each one has incredible trout fishing streams and rivers: the Lochsa, Selway, Clearwater, St. Joe, Coeur d’Alene, St. Maries. It doesn’t matter; pick one in late July through August, find a soft sandy beach, and sit in the crystal clear water. The water is so clear that with a pair of polarized glasses you can see trout behind nearly every boulder.
Just east of Moscow and Pullman you can go to the Hobo and Perkins Cedar groves, as well as see the Giant Cedar — the largest tree east of the Cascades in North America. It’s not the one in the picture below. I won’t take the thrill of it away from you.
4. Grand Coulees
The Palouse is probably home to the highest percentage of population in the world that knows the definition of the word coulee: a deep ravine carved into a lava flow. The northwestern border of the Palouse is lined with many of these coulees, formed by the great Missoula floods at the end of the last ice age. Large ice dams would build up in Montana and suddenly break free, washing the Columbia basin clean and carving giant ‘scabs’ in the basalt bedrock. One particularly amazing coulee, the Grand Coulee, is now the site of the dam of the same name: an incredible hydropower project that uses the Banks Lake reservoir as a giant pumped hydropower storage system. Below Banks Lake are Sun Lakes State Park and Dry Falls, which would have been an amazing waterfall during one of the floods. You’ll come to know the key geology of the region, columnar basalt: giant hexagonal columns up the sides of the rock cliffs, amazing. Bowl and Pitcher, downriver of Spokane, is a great place to gain an appreciation for this geological formation. Some of the waterfalls in the region are still active: Palouse Falls, Elk Creek Falls, Train Trestle Falls, and many more. One falls into the northern end of Rock Lake in the northwest corner of Whitman County, a favorite fishing spot. There are unmarked places in the lake where 200+ feet of depth rises to 2 feet in an instant, so be careful if you go.
You can thank the Palouse for your pulses and daily bread: Whitman County has been the number one wheat-producing county in the US every year since 1978. This also comes at a price; the native Palouse ecosystem is one of the most threatened on the planet, with ~99% of it converted into farm ground. You can find remnants if you go looking: Klemgard Park, Rose Creek Preserve, the WSU native arboretum, and the Snake River breaks are places to start.
Go on a good morning in early spring or late September to a native Palouse remnant, ideally with some Basalt outcropping. Wait for the sun to warm the rocks and earth. Smell the earth. That’s terroir. You’ll pick it up from wines grown in the LC-Valley AVA. I can’t stop smiling when I taste it.
And if you’ve never smelled a pail full of fresh picked huckleberries, I’m not sure you’ve lived.
6. The Universities
The Palouse is the grand connector between all of these amazing places and things. And a cultural role has emerged, much like its geological bedrock. The Palouse has two Land Grant Universities right in its heart: Washington State University and the University of Idaho. People come to these schools to gain the foundational knowledge to connect and propel them into life-long careers. The cycle of students brings a continual renewal of energy and spirit, much like the green grasses of spring that mature to brown in the fall, only to begin anew.
It’s probably this character that has made Moscow the #1 city in the State of Idaho to raise children, Pullman the #1 in the State of Washington. You don’t get both by accident. It’s also why Pullman is a top ten location in the US to start a business. And it’s probably why I chose to come back here for my career.
While this list is by no means complete, it’s a place to start. Much like the universities, the Palouse won’t simply give up its riches, but somehow, along the way, you end up changed and more mature, with a certain character.
Talk to any researcher and they’ll go on ad nauseam explaining their philosophy for building brilliant teams. It’s just the next step after how to reliably get brilliant students. So why am I adding to the noise with this post? Because when you’re in the middle of building something great, it’s easy to get sidetracked and forget your core values and principles, whatever they may be.
Think back about the amazing teams you’ve been fortunate to be a part of over the years. Several key factors were likely involved:
Contrasting and complementary characters: think The A-Team, Star Trek, X-Men; you get the point. We all have our unique set of strengths and abilities. In my field of engineering research the core character set is much like the crew of the Starship Enterprise: the driven leader (Captain Kirk), the stoic smart one (Spock), the handy utilitarian (Scotty), the stubborn sage (Bones), and the master communicator (Uhura). Most brilliant teams are probably a distribution of 3–8 core contributing members, with the optimum likely being 5, which is probably an artifact of how our brains process communication streams. We can only hold so much RAM for characters/roles/functions before we forget and side groups start forming. The goal is a gestalt team: one perceived to have collectively emerged as something different, and typically better, than the sum of its parts. I’ve seen it happen when teams are self-aware, empathic, performing in their element, and laser focused. It’s what brilliant teams do.
Focus and pressure to achieve a simple, common goal: it doesn’t get much simpler than beat the bad guy and save the world. Win the championship! The common thing in academia and US governance is to waterboard teams with many statistical performance benchmarks. NOISE! Pretty soon the team’s lost sight of the original simple goal. Performance indicators matter, just indirectly. Administrators often talk to the team in terms of general performance indicators out of convenience, which causes the individuals on the team to think the statistics are the point. Then the organization starts devolving into the metrics game while failing at its core goal. This is more of a problem for administrators. How do you as an individual contribute to the common goal? Just keep tenaciously improving yourself and your ability to fulfill your role.
Carefully managing expectations: ever heard the story of the team that nobody expected to do anything, only for the underdog to triumph over insurmountable odds? It’s an age-old story we’re suckers for, likely because it’s a key indicator of change, which is amazing. That said, how many teams declare their goal to win the championship on day one? How many faculty declare their goal to get tenure on day one? How about we manage our expectations a little and just expect to do our best, evaluate, and continuously improve? Short of cheating or quitting, that’s really about all a team or an individual can do.
There really isn’t any other secret to the success of brilliant teams than to work as fast as you can. Therein lies the magic of brilliant teams: they are fun, engaging, and often life changing. They weren’t born or bought that way. They had a great set of initial conditions and became brilliant over time. If you’re already on a brilliant team, enjoy it while you have it. If not, better start building!
Now that we have a framework for social thermodynamics both in equilibrium and in non-equilibrium transport, we have an interesting opportunity to test the consistency of both through the time domain. This is enabled by the correlation between thermodynamic and transport properties. One of the greatest unsolved challenges in thermophysical properties is a direct derivation of transport properties from thermodynamic properties. Only recently has the residual entropy, the entropy that emerges due to real fluid intermolecular exchanges, been shown to be a powerful scaling tool to help with this challenge. This observation seems obvious in social space, as the empathy that emerges during group exchange is powerful for efficient communication.
The diffusion properties directly compare thermodynamic and transport properties within a single variable. By juxtaposing thermal property trends with transport property trends, and comparing the combination, we may gain new insight into whether this framework transformation remains useful.
The Basics of Thermal Diffusion
One of the more interesting and important problems in my traditional cryogenics research is understanding how thermal diffusivity affects the time it takes a thermal wave to propagate through a material, also known as the thermal diffusion time constant. Thermal diffusivity (α) is a very interesting property that combines both thermodynamic properties (density ρ and heat capacity Cp) and a transport property (thermal conductivity k) via the equation:
α = k / (ρ Cp)
Thermal diffusivity, like all diffusion coefficients, has units of m^2/s. From it you can calculate a parameter known as the thermal diffusion time constant (τ), the time it takes a thermal wave to move through a length (L) of a bar of uniform material and constant cross-sectional area, via the equation:

τ = L^2 / α
In other words, it’s the time it takes the change of some external condition (in this case temperature) to propagate through and be felt on the other side of something. With this equation you can quickly estimate the time it will take a system to respond to a step change in temperature at a boundary and it’s sensitive! For example, the difference in thermal diffusivity between copper and plastic at cryogenic temperatures changes a 10 minute equilibration time to a 19 year(!!) equilibration time — so many stories…
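A minimal sketch of this estimate in Python. The property values below are rough, order-of-magnitude assumptions for a high-purity metal and a generic polymer at cryogenic temperatures, not measured data; the point is how violently the time constant swings with diffusivity.

```python
# Sketch of the thermal diffusivity / diffusion time constant estimate.
# Property values are illustrative assumptions only (order of magnitude).

def thermal_diffusivity(k, rho, cp):
    """alpha = k / (rho * cp), in m^2/s."""
    return k / (rho * cp)

def diffusion_time(length_m, alpha):
    """tau = L^2 / alpha: time for a thermal wave to cross length_m."""
    return length_m ** 2 / alpha

# Assumed values near 20 K (illustrative, not measured):
alpha_copper = thermal_diffusivity(k=1000.0, rho=8960.0, cp=10.0)    # high-purity copper
alpha_plastic = thermal_diffusivity(k=0.05, rho=1400.0, cp=300.0)    # generic polymer

L = 0.1  # m, a 10 cm bar
print(f"copper:  tau ~ {diffusion_time(L, alpha_copper):.2f} s")
print(f"plastic: tau ~ {diffusion_time(L, alpha_plastic) / 3600:.0f} h")
```

Even with these made-up numbers, the same 10 cm bar equilibrates in under a second as copper but takes roughly a day as plastic; stretch the length or sharpen the property contrast and the minutes-to-years spread above falls right out.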
How can that happen? Here’s a figure that shows how heat capacity (left panel) and thermal conductivity (center panel) control the thermal diffusivity (right panel). These are from Jack Ekin’s book on cryogenic materials, which I can’t recommend highly enough: http://www.researchmeasurements.com/figures/6-3.pdf. Plastics retain a large heat capacity at cryogenic temperatures because their very long polymer chains have many small ways to store energy within the chain bonds. However, the thermal conductivity, the ability to transfer that energy from one chain to another, is very limited due to the irregularity of the chains. This combination leads to a low thermal diffusivity for plastics. Metals are much simpler and more ordered, with generally lower heat capacities; however, their thermal conductivities can actually increase at low temperature due to phonon (quantized lattice vibration) transport. This combination leads to a very high thermal diffusivity for pure metals.
What this means in the social domain
The book I’m compiling on social dynamics inspired by thermodynamics (here) has already established how the capacity decreases with temperature (resources) and vMeme. My recent post on social transport mechanisms also shows that the number of diffusion mechanisms available decreases with temperature (resources) and vMeme. The question becomes the rate at which these two parameters change relative to each other with decreasing temperature. We don’t have equations of state that fix these trends yet, so we still need to use analogies.
A polymer chain molecule is analogous to a faculty member at a university — absolutely loaded with knowledge and information so as to yield a high capacity, perhaps so much so that it’s difficult for other molecules to relate and connect, in other words a low residual entropy/empathy and associated conductivity or transport value. This combination causes information and changes in boundary condition to diffuse incredibly slowly through the group.
A metal atom is analogous to a youth in a boarding or military school — relatively little knowledge (loaded with potential though!) so a fairly low capacity, however, is in a class among very homogeneous peers that know how to line up and speak on command, which are also united for a common cause (graduate). It’s like a crystal — very little room to move but when somebody says the head is mad, everybody knows what that means and fast. This leads to a higher residual entropy/empathy and associated conductivity or transport of info through phonon (acoustic vibration) resonance. This combination causes information and changes in boundary conditions to diffuse incredibly fast through the group, albeit over small ranges in temperature (resources).
Now let’s consider another class of material: high entropy metal alloys. This is a new class of materials emerging with very interesting properties for resilience. The closest material analogy in the above graphs would be stainless steel. You create these alloys by mixing many different atom types together, creating crystal structures that maintain a balance between the constituent properties. The result? Despite a decent heat capacity, a very low thermal conductivity leads to a very low diffusivity, close to plastics. The analogy that came to mind was the telephone game you play in elementary school, where you start a message at one end of a line and watch how it changes when it comes out the other end. If you lined up a set of identical quadruplets and asked them to play the telephone game, you’d likely not have much of a game. Maximize the diversity of the group (analogous to a high entropy alloy), though, and you’re bound for some fun. Which group, the quadruplets or the diverse group, is more likely to remain resilient in the face of an unknown stressor/challenge? My money’s on the diverse group, as long as the group is not so large that slow information diffusion becomes the sensitive parameter.
“Whether it be the sweeping eagle in his flight, or the open apple-blossom, the toiling workhorse, the blithe swan, the branching oak, the winding stream at its base, the drifting clouds, over all the coursing sun, form ever follows function and this is the law.” — Louis Sullivan 1896
How Thermodynamic Laws Shape Structures
The challenge any engineer faces is finding the optimal form for a design. Why is a tree shaped like a tree? And why does this look like a river delta, or a lung, or a neuron?
In the 1990s, mechanical engineering professor Adrian Bejan developed the “Constructal Law of Thermodynamics.” Bejan concluded that entropy generation causes design structures to evolve in order to maximize flow. In the case of the river delta and lung above, these tree-like structures are all maximizing flow of mass, and thereby entropy generation. Diffusion constants for mass, momentum, heat, chemical potential, and electricity (basically transport of any physical phenomenon) are all coupled to the entropy generation term through Onsager’s non-equilibrium thermodynamics. By constructing an entropy balance you can analyze the generation term to see that any flowing system will maximize entropy generation through readily calculable branching ratios. Bejan was able to construct computer algorithms that could predict the shapes of trees and other flow systems, ultimately concluding that tree-like structures are optimal, which very well could be the case for flow systems based on mass.
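As one concrete instance of a “readily calculable branching ratio,” here is a sketch using Murray’s law (parent diameter cubed equals the sum of the child diameters cubed), a classic minimum-work result for mass-flow networks that is related to, but not the same as, Bejan’s constructal derivation.

```python
# Murray's law sketch: d_parent^3 = sum(d_child^3) for a branching flow network.
# For n equal children, each child diameter is d_parent * n^(-1/3).

def child_diameter(d_parent, n_children):
    """Diameter of each of n equal children under Murray's law."""
    return d_parent * n_children ** (-1.0 / 3.0)

d0 = 1.0  # parent duct diameter (arbitrary units)
for n in (2, 3):
    print(f"{n}-way split: child/parent diameter ratio = {child_diameter(d0, n):.3f}")
# For a symmetric bifurcation the ratio is 2^(-1/3) ~ 0.794, a ratio that
# recurs in lungs, vascular trees, and river deltas.
```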
Look at the universe though, and we realize very quickly that all structures in nature do not evolve into tree-like hierarchies. Planets, solar systems, galaxies, and social networks have very different structures as they do not rely primarily on continuous mass transport. Yet the thermodynamic laws are universal. So how can we use these laws to understand something like the shapes of social structures?
The structure of social networks
Social networks are built upon information exchange/diffusion mechanisms. All information diffusion mechanisms have varying strengths and weaknesses, which result in blind spots: information the network just misses. So it can be seen how these information exchange mechanisms, in turn, shape and structure the social network, which creates a resource hierarchy for the network, which in turn shapes the network’s values.
Here’s a graphic for how these information structures have evolved hand-in-hand with value Memes over time.
What you can see from the figure above is that each information storage mechanism works for a particular spiral value meme (which the corresponding network maps are from): 1. Survival, 2. Tribal, 3. Authoritarian, 4. Legalistic, 5. Performance, 6. Communitarian, 7. Systemic, and 8. Global/Holistic. Remember that all of the mechanisms/value levels matter and are useful in specific situations, yet evolved to serve differing needs through time. Each of these information diffusion mechanisms likely has its own diffusivity property, similar to the mass, momentum, chemical, thermal, and electrical diffusivities in traditional thermodynamic space. It’s going to take some work to calculate these information diffusivity values for every instance or case.
From the structure of thermodynamics, and with the assertion that empathy is the social equivalent of entropy, we can see that social information structures can evolve from the laws of thermodynamics just as mass-based diffusion structures do. As your social network evolves, you’ll get good at optimizing a particular value set, only to realize that the values your information structure has evolved for are missing another set of values that have become the sensitive parameter(s), and you’ll start structuring information to adapt to that new value set. This gets at the sophistication vs. evolution problem: you can become increasingly specialized and sophisticated in a particular value set for a particular problem, but at some point you need to evolve to a new paradigm. It’s local vs. global optima. As information diffuses, empathy generation occurs and increases as we seek to maximize the flow of information.
So the takeaway: empathy, like entropy, is key to the transport and diffusion of information, in a way directly analogous to the transport and diffusion of energy and mass. This transport and the associated entropy/empathy generation create the information and physical structures we see and use every day. Every structure has its limits, and hence a resource and value hierarchy emerges to correspond to the structure. This is going to take a while to fully unpack…
Several friends have been asking me to comment on a recent article from Wired Magazine titled, “The Genius Neuroscientist Who May Hold the Secret to True AI.” The article is about Karl Friston’s “free energy principle,” which is essentially that the purpose of life is to minimize free energy, defined qualitatively as the difference between your expectations and your sensory inputs. The secret, according to the article, is applying thermodynamic principles to intelligence. For anyone following these posts, that comes as no surprise. The timing of this article is convenient, as I’ve been waiting a while now to write what the social thermodynamic laws say about efficiencies. So here goes…
Thermodynamic “Free Energy”
There are many energies we utilize in thermodynamics: potential, kinetic, internal, Gibbs, Helmholtz, Landau, etc. Several of these energies are often described as “free,” which may be one of the greater points of confusion in all of thermodynamics. One of the first lessons Richard Jacobsen taught me as a graduate student was how silly this “free” word really is. This is not “free” as in it doesn’t cost anything; “free” here denotes that the energy must be defined relative to a reference state that is free to be set at an arbitrary value, as there is no way of making an absolute measurement of the particular energy form in question. The variability of reference states is a common problem that plagues folks learning thermo for the first time; they’ll often mix property values from different sources without realizing that the reference points changed. Although the reference points can change, one lesson I hammer home to my thermo students is that the change in a “free energy” property must be identical for the same process, regardless of reference state; i.e. it will always take the same amount of energy to boil a cup of pure water from liquid to vapor. Since you have to use a reference state to calculate any of these energies anyway, why further obfuscate the problem with the word “free”? No wonder everybody has a hard time with thermo.
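A tiny numeric demonstration of that invariance, using round handbook values for saturated water at 1 atm: shifting the reference state offsets every enthalpy value, but the change for the boiling process comes out identical every time.

```python
# Reference-state invariance of a process energy change.
# Enthalpy values are rounded handbook numbers for water at 100 C, 1 atm.
H_LIQ = 419.0    # kJ/kg, saturated liquid (relative to one common reference)
H_VAP = 2676.0   # kJ/kg, saturated vapor (same reference)

for offset in (0.0, -1000.0, 12345.0):   # arbitrary reference-state shifts
    dh = (H_VAP + offset) - (H_LIQ + offset)
    print(f"reference shifted by {offset:+9.1f}: latent heat = {dh:.0f} kJ/kg")
# Every line reports 2257 kJ/kg: the boiling energy is reference-independent.
```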
Some Efficient Comparisons
What Friston is probably looking for is something akin to a Thermodynamic Efficiency. Thermodynamic Efficiencies were created as useful comparisons between energetic processes. A thermodynamic efficiency exists for each of the thermodynamic balances (a.k.a. thermodynamic laws):
First Law Efficiency: What you want divided by what you paid to get it. Typically W_out/Q_in for a heat engine, Q_out/W_in for a refrigerator, etc.
Second Law Efficiency: What you got divided by what you could’ve gotten. Typically W_actual/W_ideal for a heat engine. A second law efficiency can also be obtained by dividing the first law efficiency by the Carnot efficiency (the ideal or limiting efficiency defined by the temperatures of the thermal reservoirs) for a process.
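A quick numerical sketch of the two efficiencies for a heat engine — all of the numbers here are invented for illustration, not from any real machine:

```python
# Invented example numbers for a heat engine between two reservoirs.
T_hot, T_cold = 600.0, 300.0   # reservoir temperatures, K
Q_in, W_out = 100.0, 30.0      # heat in and work out, kJ

eta_first = W_out / Q_in              # first law: what you got / what you paid
eta_carnot = 1.0 - T_cold / T_hot     # ideal limit set by the reservoirs
eta_second = eta_first / eta_carnot   # second law: actual / ideal

print(eta_first, eta_carnot, eta_second)  # 0.3 0.5 0.6
```

So this engine converts 30% of the heat it takes in, but given its reservoirs the best it could ever do is 50% — meaning it achieves 60% of its ideal.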
Friston’s challenge of resolving the difference between what we actually see and what we expect to see can be explained by the Second Law Efficiency, translated into the framework of Social Thermodynamics (the body of the book I’m writing on this is down the page here). The perfect or ideal case allows no entropy generation (remember, I assert that entropy is empathy in social space). This requires that you already know everything about the process, so no further empathy generation can occur. This is also the case in which you can maximize output, because you know everything. Reality, though, is imperfect. We never know everything, and as such what we actually get is far from ideal. Hence empathy generation occurs and we learn a little more about the process each time.
First law efficiency is all about the energies — you put a certain amount in and you expect a certain amount back. You never get the same amount or more back, and some processes have a much lower return than others (Facebook, anyone?). You don’t know why until you look at the second law efficiency and realize that the process was inherently limited to a low return by entropy/empathy — you couldn’t have gotten much back. Herein lies an important takeaway.
Efficiency really is the key to Happiness
Happiness, you elusive emotion… several recent self-help authors and researchers have taken on the challenge of happiness. Happiness can be thought of as a first law efficiency in social space — we pay a certain amount in and we want and expect a certain amount back. We often think of what we put into a process as what we have control over, but we really should also think about managing our expectations. In many processes the ideal/limiting efficiency is so low that our hopes are likely to be dashed unless we consider these limits. It may not be possible to realize the return we seek given the social mechanism we’re working through.
But what about luck? Can’t we be suddenly surprised by winning the lottery or a raffle? Absolutely. It’s a statistical process, and all of the players in the system have a probability distribution. Given enough trials, though, the thermodynamic limits/laws hold. We can get lucky, but in the universe there’s no such thing as a “free energy” or a free lunch. The same goes for social space.
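If it helps to see the statistics, here’s a throwaway Monte Carlo raffle in Python — the ticket price, prize, and odds are all made up. A handful of tickets can beat the expectation (luck), but over many trials the return converges to the expectation set by the odds.

```python
import random

random.seed(42)  # repeatable runs

TICKET_PRICE = 1.0
PRIZE = 50.0
WIN_PROB = 0.01  # expected return per dollar: 0.5 -- the "limit" for this game

def play(n_tickets):
    """Return per dollar paid over n_tickets independent raffle tickets."""
    winnings = sum(PRIZE for _ in range(n_tickets) if random.random() < WIN_PROB)
    return winnings / (n_tickets * TICKET_PRICE)

print(play(10))         # a small sample can get lucky (or unlucky)
print(play(1_000_000))  # converges toward the 0.5 expectation
```

No individual can change that 0.5 limit by playing more — only the design of the game (the mechanism) sets it, which is the point about social mechanisms above.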
Structured versus unstructured — it’s the age-old debate in education. It’s popped up recently in my department, my lab, and my family. While thinking about it in passing I had a major shift in how I view the problem. Hopefully it will change how you consider the problem too.
The problem is best exemplified for engineers by the traditional mathematics curriculum. Anybody who’s had a calculus class knows that the textbook is packed with equations that you, through repetition, are supposed to derive the solutions for. Nobody has any clue where the starting equations come from, what they are connected to, or why they matter. The instructor assigns a few exercises that they find interesting, the solutions are kept by the instructor in a solutions manual, you do the exercises, submit your work, and get judged on accuracy. It’s been that way for over a century.
Times have changed.
When I was a student, the tradition was that as soon as a textbook came out, the solution manual would be posted to some back-channel website and everyone would have the solutions. This started as I was graduating college and grew to a terrible state where nobody had any idea how to do the work because they’d just copy the solution from the manual. It’s just our nature to take the path of least resistance.
Since I started at WSU I’ve worked hard to break this cycle by using software to come up with my own original homework assignments, with motivation and inspiration coming directly from the research in my laboratory — which I happen to find fun. The only solution is in my head. However, this semester’s class surprised me. I would make up a homework assignment (literally about one of my old cars) and within 30 minutes of it being assigned someone from the class would post it to chegg.com. The irony is that even I hadn’t worked out a solution to the problem yet!
Thinking back on my math days as a student — I really didn’t understand differential equations until my graduate-level heat transfer course from my advisor Greg Nellis at Wisconsin. In that class he’d give us a situation — a temperature or heat transfer boundary condition — and ask for some kind of solution based on a typical problem that would come up in the field. We had to derive the sets of equations that governed the problem from scratch! The challenge was to use your judgment to include only the physics that the problem was sensitive to. Once you had the governing equations derived, you’d just plug them into software like Wolfram Alpha or Maple, which would do the math to spit out a solution from those already known to humanity. The machines are good at this. Graph the solution, check it against the initial requirements, and voila! Sadly though, this heuristic approach to setting up problems is not typically emphasized by engineering curricula, with notable exceptions like Olin College.
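That formulate-then-delegate workflow can be sketched with open tools today. Here’s a toy example using Python’s sympy as a stand-in for Maple or Wolfram Alpha — the lumped-capacitance cooling model is my own choice of example, not one of Greg’s assignments:

```python
import sympy as sp

# Our job (the judgment call): pick the physics the problem is sensitive to.
# Here, a small part cooling in a fluid -- lumped capacitance is enough:
#   dT/dt = -k * (T - T_inf)
t, k, T_inf, T0 = sp.symbols("t k T_inf T0", positive=True)
T = sp.Function("T")

ode = sp.Eq(T(t).diff(t), -k * (T(t) - T_inf))  # governing equation we derived

# The machine's job: produce the solution from those already known to humanity.
sol = sp.dsolve(ode, T(t), ics={T(0): T0})
# closed form: T(t) = T_inf + (T0 - T_inf)*exp(-k*t)
print(sol)
```

All the engineering was in writing `ode` down; the solver step is mechanical, which is exactly why curricula should emphasize the formulation.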
What I’ve noticed recently is the difficulty students have transitioning from the structured, traditional, algorithmic problem-solution paradigm to the less-structured, heuristic problem-formulation paradigm. I’ve had several students who are absolutely brilliant with traditional algorithmic problems, only to become unhinged or unmoored when freed from the algorithmic structure. Change the situation and students totally forget to apply the basic rules and processes you’ve taught them. Lord of the Flies…
It reminds me of a problem from my childhood.
From age 6 through probably 13 I was involved with a 4-H dog obedience club. Through repetitive use of treats and a “choker” chain (carrots and sticks), you can teach a dog to walk alongside you, sit when you stop, stay when commanded, and perform many other exercises in “obedience.” I’d be able to train a dog well, win awards, etc. It’s really not that hard; it just takes time. I learned to exercise restraint with authority after choking my dog once in front of an audience — one of a handful of griefs I have in life. What I noticed, though, was that despite being very obedient, when my dogs got out of the yard or the leash came off, they’d lose their minds. Really, they’d take off, not listen to anything, pee all over everything, run out into traffic, etc.
Watch a dog trained in the woods without a leash and the situation is very different. They know to stick within earshot because they could get attacked by an animal, left behind, or lost. They can govern and bound problems for themselves and improvise solutions when needed. That said, they probably don’t sit when they should, may have problems staying on command, and probably wouldn’t do well in a city with a leash law.
What you can see from this comparison is that neither approach to education — algorithms or heuristics, structured versus unstructured, problem solution versus problem formulation — is optimal for all situations. The challenge is a healthy contrast that teaches people when and how to ‘shift gears’ between these paradigms based on the environment and situation.
If we rebalanced curricula so that the foundational algorithmic solutions taught in morning classes are motivated by real, industry-sponsored design problems in afternoon classes, a process known as pedagogical scaffolding, we may get to where we want to be. Websites like Chegg.com would come to look completely silly in the eyes of students relative to professional faculty backed by industry, and students would be well on the path to proficiency. See this post from 3 years ago for more info: https://hydrogen.wsu.edu/2015/10/09/the-grand-challenges-of-restructuring-engineering-departments/
And, unlike the old adage, I know firsthand it’s totally possible to teach an old dog new tricks. You just have to begin.