Predicting Melting Points.

One feature I have wanted to add to ChemSi is the ability to have state symbols - that is, instead of outputting

C3H8 + 5O2 -> 3CO2 + 4H2O

It would instead output

C3H8(g) + 5O2(g) -> 3CO2(g) + 4H2O(l)

In order to do this I needed some way to predict the melting and (eventually) boiling points of different compounds, given only a little information about them.

My initial solution was simply to use enthalpy and entropy changes with the Gibbs equation. If I could get the entropy change from solid to liquid, and the enthalpy change, then I could use

\Delta G = \Delta H - T\Delta S = 0

\Delta H = T\Delta S

T = \frac{\Delta H}{\Delta S}

This solution works - to a point. For chemicals I have the entropy and enthalpy data for it works very nicely, but I do not have the data for all states of all compounds - and it is unreasonable to obtain it all. Additionally, this felt like a bit of a cop out. The point of ChemSi is to try and calculate (or at least approximate) these values from easily accessible theoretical data, as is done with the reaction prediction. If I wanted it to predict them excellently I could just provide it with some rules (ie, a compound containing C, H, and O reacts with O2 to produce blah) or, even worse, just give it a list of the reactions.
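
To make the approach concrete, here is a minimal sketch of this first method in Python (the function name is mine, not ChemSi's); the example values are the NIST fusion data for ammonia, which also appear in the entropy essay further down:

def melting_point(dH_fus, dS_fus):
    # At the melting point G = H - T*S = 0, so T = H / S.
    # dH_fus in J mol^-1, dS_fus in J K^-1 mol^-1, result in K.
    return dH_fus / dS_fus

print(melting_point(5653.0, 28.93))  # ammonia: ~195.4 K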

After some Googling, I came across the Lindemann criterion. Essentially it boils down to

T_{m} = \frac{2\pi mc^{2}a^{2}\theta^{2}_{D}k_{B}}{h^{2}}

Where the 'dodgy constants' are \theta_{D} (the Debye temperature) and c. The rest are either pretty standard constants (the Boltzmann constant, for example) or pretty easy to get. The reason the dodgy constants were a problem is that they require information on the geometry of the crystalline material - which was quite difficult to get access to.

Just a note on that crystalline material part - almost all of these prediction methods (bar the enthalpy and entropy change one) are limited to ionic crystals, and are entirely useless for other molecules.

Following this, I searched around some more. I had the idea that perhaps the Lattice Enthalpy would have some relationship with the melting temperature - after all, surely a lattice with a more negative lattice enthalpy would be more strongly bonded so more energy would be needed to break it apart? I decided to test this hypothesis.

For a sample of ionic lattices, Na-Cs paired with F-I, I was able to produce the following graph. (Only using the group 1 metals and halogens may be a bit of a limitation, which led to me having to do some fiddling later on to fit the other groups; at this stage, however, I was just looking for a relationship. Lithium is not included as bonds with lithium tend to be more covalent in character.) The data was taken from this website - I know it refers to Lattice Energy, but from the explanation with it I believe they are referring to the lattice dissociation enthalpy, so it is still usable in this context.


[Graph: melting point plotted against lattice dissociation enthalpy for the sampled salts]

As you can see, this produces a reasonable correlation - the pairwise correlation coefficient calculated by Octave is 0.9300.

I decided to go ahead and use this lattice enthalpy method to predict the melting points. Unfortunately, as you may have expected, I ran into the same issue as with the Gibbs method - I can't realistically store enough lattice enthalpies! Luckily, there is an equation called the Born-Lande equation;

 E = -\frac{N_{A}Mz^{+}z^{-}e^{2}}{4\pi \varepsilon_{0}r_{0}}(1-\frac{1}{n})

This does not produce exact lattice enthalpies - it is more of a prediction. This method is used in predict_mp_alg2().
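
Just as a sketch of how the Born-Lande equation can be evaluated (this is my own illustration, not the predict_mp_alg2() code), here it is for NaCl, using the textbook Madelung constant, interionic distance and Born exponent:

import math

N_A = 6.02214e23      # Avogadro's number (mol^-1)
e = 1.60218e-19       # elementary charge (C)
eps0 = 8.85419e-12    # vacuum permittivity (F m^-1)

def born_lande(M, z_plus, z_minus, r0, n):
    # Born-Lande lattice energy in J mol^-1 (negative = bound lattice)
    return (-N_A * M * z_plus * abs(z_minus) * e**2
            / (4 * math.pi * eps0 * r0) * (1 - 1.0 / n))

# NaCl: Madelung constant 1.7476, r0 = 282pm, Born exponent n = 8
print(born_lande(1.7476, 1, -1, 282e-12, 8) / 1000)  # roughly -750 kJ/mol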

I also decided to try and see if I could modify the equation used to calculate the shell energy that is already used for reaction prediction to approximate lattice enthalpy. I won't go into detail about it here, but it is given as predict_mp_alg1() in ChemSi if you want to check it out.

Just to show these equations working, I decided to predict the melting points of three compounds using both the Lattice Enthalpies (Born-Lande) and my shell energies. The compounds I chose were NaCl, FeCl2, and CsI.

Compound | Actual MP (K) | Shell Energy MP (K) | Born-Lande MP (K)
NaCl     | 1074          | 1088                | 1122
FeCl2    | 950           | 993                 | 895
CsI      | 894           | 1084                | 819

As you can see, both give a reasonable (±50K) approximation for NaCl, with the shell energy giving the better prediction; this is mirrored with FeCl2, where the shell energy is again closest (though both are further out than for NaCl), while for CsI the Born-Lande approximation is closest and the shell energy prediction is too far out. This is likely because the shell energy calculation gets less accurate at higher atomic numbers.

Both algorithms have been left in ChemSi so the differences can be seen.

Ecosystems.

Recently I was reading the excellent Serengeti Rules by Sean B. Carroll, and then this happened.

A few months ago, I wrote the population simulator that I wrote about here, but that purely dealt with so-called 'bottom up' regulation - the primary limiting factors of the population were the amount of food and an artificial carrying capacity. Reading The Serengeti Rules got me thinking about 'top down' regulation - ie, big fish eats little fish, so there are fewer little fish to eat seaweed and other fishy things. I felt that the artificial carrying capacity was just that - artificial - so I set about writing a new simulator, which attempts to create a very simple (really simple) model of an ecosystem (more a food web).

Essentially, the model has three numbers - the population of a producer, such as a plant; the population of a primary consumer (prey), such as a rabbit; and the population of a secondary consumer (predator), such as a fox. Each 'cycle', each of these changes. Depending on which model I use, the producer can either increase linearly (which allows both consumers to increase rapidly), be constant (in which case I get a realistic carrying capacity), or change in a wave pattern (which roughly simulates seasons); this sets how the producer changes each cycle. Then, using the ratio of prey to producers and a random number generator, it is decided whether each prey reproduces (producer -2, prey +1), stays alive (producer -1, prey +0), or dies (producer +0, prey -1). A similar procedure is applied to the predators (secondary consumers) - a sketch of the prey step is shown below.
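
Here is a minimal sketch of the prey step (my own simplification of the rules above - the real simulator also updates the predators the same way and regrows the producers according to the chosen model):

import random

def prey_step(producers, prey):
    # For each prey, weight the outcome by the food available per prey:
    # reproduce (producer -2, prey +1), survive (producer -1), or die (prey -1).
    for _ in range(prey):
        food_per_prey = producers / max(prey, 1)
        roll = random.random() * food_per_prey
        if roll > 1.0 and producers >= 2:
            producers -= 2
            prey += 1
        elif producers >= 1:
            producers -= 1
        else:
            prey -= 1
    return producers, prey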

The output of this program being run under different situations is shown below.


Sim 1; No predators are present, and there is a constant amount of food. The prey population (green) increases and then is relatively stable.


 

Sim 2; Same as Sim 1, only predators are introduced. While in Sim 1 there was only bottom up regulation, in this one there is top down regulation too - the presence of the predators keeps the prey population at a lower level.
Sim 3; Limited amount of food, small population of prey. No predators. Prey consume all the food, and starve themselves to death.
Sim 4; Prey, predators and large amount of initial food. Small growth rate of food. Initial large amount of food supports a large population of prey, but eventually much food is used up so the prey population slumps. This means there is less pressure on the food so it returns to the original levels.

 

Sim 5; Initial high concentration of food, so prey peak. Food is regulated by seasons and so when there is lots of food available, more prey are alive so the predator population increases.

This program can be found on my GitHub. It is probably the least taxing of all the simulations I have written so far.

Changes to ChemSi

Recently I chose to rewrite ChemSi in its entirety to clean up the code base and add some new features. Instead of only having a text-based front end, which limits the usability (so you can't have cyclic reactions, etc) and simultaneously makes it harder to maintain (I had wanted to use electron energies instead of electronegativities), I chose to write it in an object-oriented manner. This enables it to be used as a Python module, as opposed to a program.

Using

import chemi

the module will be loaded, and commands can be used within it. For example;

import chemi

# Look up sodium in the periodic table and inspect it
q = chemi.periodic_table["Na"].out(1)
print q["name"]
print q['shells']['3s0']
print q

# Two reactions at 300K; the products of s will feed into q
s = chemi.Reaction(300)
q = chemi.Reaction(300)
s.reactants.append(chemi.Compound("NaBr", 0, 0, []))
s.reactants.append(chemi.Compound("NaBr", 0, 0, []))
s.reactants.append(chemi.Compound("Cl2", 0, 0, []))
s.predict()
print(chemi.output(s.return_reactants()) + " -> " + chemi.output(s.return_products()))

# Reuse the products of s, swapping in fluorine
q.reactants = s.products
q.reactants[2] = chemi.Compound("F2", 0, 0, [])
q.predict()
print(chemi.output(q.return_reactants()) + " -> " + chemi.output(q.return_products()))

Will give the following output;

ben@Nitrate:~/Development/ChemSi$ python test.py
Sodium
{'energy': -1.5, 'number': 1}
{'molar': 23.0, 'electronegativity': 0.9, 'name': u'Sodium', 'shells': {'2p1': {'energy': -166.7, 'number': 2}, '2p0': {'energy': -166.7, 'number': 2}, '2p-1': {'energy': -166.7, 'number': 2}, '3s0': {'energy': -1.5, 'number': 1}, '1s0': {'energy': -1646.3, 'number': 2}, '2s0': {'energy': -275.5, 'number': 2}}, 'an': 11, 'small': u'Na'}
Cl2 + 2NaBr -> Br2 + 2NaCl
F2 + 2NaCl -> 2NaF + Cl2

Here I have done something which would've been impossible in a previous version of ChemSi. First of all I used the new periodic table definition to return some information about sodium - the name, the 3s0 shell energy (in eV, approximated using the Rydberg equation) and then a general print-out of it. I then go on to add some sodium bromide to reaction s, and some chlorine. I then get it to use my algorithm to predict the reaction products, before printing out the reaction;

Cl_2 + 2NaBr \rightarrow Br_2 + 2NaCl

Which it predicts correctly. I then move the products into reaction q, but replace the bromine with fluorine. I then predict them together, and I get;

F_2 + 2NaCl \rightarrow Cl_2 + 2NaF

These predictions are typically more accurate than the old model - for example;

FeBr_3 + Al \rightarrow AlBr_3 + Fe

In the previous ChemSi code this would've given AlBr and FeBr2 - which isn't accurate. So, in some respects it is getting better. On the other hand, the algorithm I am using to fill the shell energies has faults with transition metals and elements with a 4s or higher orbital (it fills them going up in n, l, m - ie 1s, 2s, 2p, 3s, 3p, 3d, 4s - as opposed to the actual energy order), which means the above demonstration with NaI will not work accurately - the chlorine is calculated to be higher and so is not substituted. Regardless, the new prediction engine seems to work better - fixes will be released in the coming weeks to address these issues.

The code is already up on GitHub and alongside it you will find a new class definitions file which explains each of the new classes.

Junk DNA

Price (as of writing): £8.99

Publisher Synopsis: 

For decades, 98 per cent of our DNA was written off as 'junk' on the grounds that it did not code for proteins. From rare genetic diseases to Down's Syndrome, from viral infections to the ageing process, only now are the effects and the vital functions of these junk regions beginning to emerge. Scientists' rapidly growing knowledge of this often controversial field has already provided a successful cure for blindness and saved innocent people from death row via DNA fingerprinting, and looks set to revolutionise treatment for many medical conditions including obesity. From Nessa Carey, author of the acclaimed The Epigenetics Revolution, this is the first book for a general readership on a subject that may underpin the secrets of human complexity - even the very origins of life on earth.

 

Thoughts

I originally purchased this book as a follow-up to The Epigenetics Revolution and immediately noticed the similarities. The two are similar in both content and style (which in all honesty is to be expected, given they are about closely related topics by the same author). As with The Epigenetics Revolution I was impressed - Carey explains the main theories well, in an easily readable manner, and provides examples of the effects of the different modifications. The Agouti mice make a reappearance, although the explanation of these is very similar to that in The Epigenetics Revolution.

I found the information within the book interesting as it follows on from genetics at school, in which introns are taught as being useless waste - Carey does a good job of dispelling this and explaining what they do and how they affect life, from telomeres and longevity to the inactivation of the spare X chromosome in females by Xist and Tsix.

 

I am currently reading 'The Blind Watchmaker' by Richard Dawkins but will probably not write a review on this (it has been out for ages and is pretty well documented).

Entropy

Apparently next week actually means next month. Sorry for the delay, I have been busy!
Following the last post regarding entropy and Gibbs calculations I decided to change the aim of this one to entropy. This was originally written as an essay for physics, so please excuse any dodgy wording or citation marks. Furthermore, it is a first draft - so it may have mistakes and other issues. One good side effect of this is that it has a proper list of sources at the end!

Entropy is a measure of the multiplicity of a thermodynamic system, or how disordered a system is - the higher the entropy, the more ways in which it can be arranged, and generally this means it will be more disordered. Imagine a brick wall - it is ordered, so has a low entropy, but if knocked down it becomes more disordered - it has a higher multiplicity, as the fragments of wall can be arranged in more ways than the original wall, and hence a higher entropy. It is often easier to knock down a wall than to build one, meaning that generally entropy increases. Entropy can also be seen as the amount of energy unavailable to do work. If energy is stored in a low entropy glucose molecule it can be used to do work; if the energy is spread throughout some paper as heat then it is much harder to use it to do work, and so the paper has the higher entropy.
According to the second law of thermodynamics the entropy of the universe will always increase - over time the multiplicity of the universe will increase. This makes entropy one of the only physical quantities that picks out a direction in time. If time ever ran in reverse, entropy would be decreasing. For example, while a thermometer can go up and down, it would be very unlikely for a smashed window to suddenly reassemble; were entropy running in reverse (ie decreasing), it would reassemble easily.
Because the second law of thermodynamics predicts the increase of entropy, entropy has an important role to play in predicting how chemical reactions will occur. The first section of this report concerns itself with how entropy changes work, the second moves on to Gibbs Free Energy, and the last section is about the heat death of the universe.

Physical Uses of Entropy

The idea that entropy always increases may appear rather odd - if entropy always increases, how come water can be frozen? How come brick walls can be built? These things can occur because it is the universe's entropy, \Delta S_{univ}, which must increase, not necessarily the entropy of the system, \Delta S_{sys}. As long as the sum of the change in entropy of a system and the change in entropy of its surroundings is positive - ie the entropy overall increases - a process can take place. Therefore we can say that

\Delta S_{univ} = \Delta S_{surr} + \Delta S_{sys} > 0

And in order for a reaction or physical change to take place, \Delta S_{univ} must be positive. For example, freezing ammonia is exothermic, so heat is lost to the surroundings. This means the surroundings gain energy and so become less ordered; therefore, while the entropy of the frozen ammonia, \Delta S_{sys}, has decreased, in doing so the entropy of the surroundings has increased, meaning the overall change in the universe is positive. On the other hand, when ammonia melts, \Delta S_{surr} is negative - melting is endothermic so takes in heat. Luckily for the laws of thermodynamics, solid ammonia melting produces liquid ammonia, and the molecules in liquid ammonia can be arranged in more ways than in the solid (they are free to move, so have higher entropy) - so \Delta S_{univ} is still positive.
This relationship is very useful for working out the temperatures at which reactions like this will occur. Another example may be the brick wall. In order for a brick wall to be built by hand, carbohydrates are broken down by the body which increases the multiplicity of molecules in the body - the multiplicity of the surroundings increases, so the total entropy change of the universe is positive.
In relation to solid ammonia and liquid ammonia, as above, entropy allows for the calculation of the freezing point of ammonia. The entropy change of the surroundings of a material can be calculated as follows;

\Delta S_{surr} = \frac{q_{surr}}{T_{surr}}

Where q_{surr} is the heat absorbed by the surroundings - the enthalpy change - measured in Joules, and T_{surr} is the absolute temperature of the surroundings (in Kelvin). Typically q_{surr} is given per mole, which gives the entropy per mole. The reason T_{surr} is the divisor is that the hotter something is, the lower the change in entropy for a given additional amount of energy - imagine a brick wall being pushed over versus a pile of bricks being pushed over; the brick wall has the larger entropy change as it goes from very orderly to not.
Given that \Delta S_{univ} must be positive, this allows the calculation of the freezing/melting point of ammonia. From looking in the US NIST Chemistry WebBook we can find that the entropy change of fusion (the entropy change on melting - fusion is another name for melting) for ammonia is 28.93JK^{-1}mol^{-1}. As we are looking at freezing we can flip the sign - the entropy change of melting is the opposite of the entropy change of freezing. We then need the standard enthalpy change of fusion, which the same data source tells us is 5653Jmol^{-1}. Putting this into the above formulas gets us the following;

\Delta S_{univ} = \Delta S_{sys} + \frac{q_{surr}}{T_{surr}} = -28.93 + \frac{5653}{T_{surr}}

And by plotting this we can see where the entropy of the universe is positive, and therefore the fusion point of ammonia.

[Figure 1: plot of \Delta S_{univ} against T_{surr} for freezing ammonia, crossing zero at 195.4K]

As we can see in Figure 1, the entropy change of the universe is positive whenever T_{surr} < 195.4K. From looking in the aforementioned data book we find that the temperature of fusion, T_{fus} is 195.4K - so we have the correct point. This same relationship works for any other thermodynamic system.
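
The same crossing point can be found without plotting - a quick sketch using the numbers above:

dS_sys = -28.93   # entropy change of freezing ammonia (J K^-1 mol^-1)
q_surr = 5653.0   # heat released to the surroundings on freezing (J mol^-1)

def dS_univ(T):
    return dS_sys + q_surr / T

print(q_surr / -dS_sys)   # 195.4 K - where dS_univ crosses zero
print(dS_univ(190) > 0)   # True: below 195.4K freezing increases S_univ
print(dS_univ(200) > 0)   # False: above 195.4K it does not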

Gibbs Free Energy

Gibbs Free Energy is a way of doing the same calculation as we just did to work out the fusion point of ammonia, only it considers purely the system and not the surroundings, which makes it easier to deal with. The Gibbs Free Energy is in essence the amount of energy needed on top of the energy contained within the system for a reaction to go - so if it is negative then the system has sufficient energy, with some free energy to spare. It only holds at constant pressure, however. The equation for Gibbs Free Energy is given as follows;

\Delta G = q_{sys} - T_{sys}\cdot \Delta S_{sys}

This is derived from the same equations we used previously by assuming that T_{surr} = T_{sys} - as both are in the same environment - and the knowledge that q_{surr} = -q_{sys} (because energy leaving the system goes to the surroundings). This allows the following to occur;

\Delta S_{univ} = \Delta S_{sys} + \frac{q_{surr}}{T_{surr}} = \Delta S_{sys} - \frac{q_{sys}}{T_{sys}}

And by multiplying this through by -T_{sys} we gain the following;

-T_{sys}\cdot \Delta S_{univ} = q_{sys} - T_{sys}\cdot \Delta S_{sys} = \Delta H_{sys} - T_{sys}\cdot \Delta S_{sys} = \Delta G_{sys}

Which gives us the Gibbs free energy. If this is lower than 0 then the reaction is thermodynamically favourable and will occur spontaneously, as there must be a decrease in the overall free energy of the system for the reaction to be spontaneous at a given temperature. For example, we can find the fusion point of ammonia again;

\Delta G = -5653 - (195.4\cdot -28.93) = 0Jmol^{-1}

As \Delta G is 0Jmol^{-1} we know that this is the fusion point - if T_{sys} was any lower then \Delta G would be negative and it would freeze (given that these are the enthalpy change of freezing and the entropy change of freezing).

Heat Death of the Universe

Given that \Delta S_{univ} is always positive, it follows that the amount of entropy, or disorder, in the universe is always increasing. This leads to a theory about how the universe will end: the ever-increasing entropy implies that, as time goes by, the universe will approach thermodynamic equilibrium and its maximum entropy (at the moment it is not at equilibrium, as the entropy can still increase). At that point there will be no more free energy, as is needed for spontaneous reactions, so no more reactions will be able to progress, and as such the universe will die.

For an example of this, imagine you have a room into which you spray an aerosol. The fumes from the aerosol diffuse into the room until a point is reached at which the aerosol has fully diffused, and there is no longer one area with a large quantity of aerosol and another without. In the same way, it is theorised that the universe will eventually balance out and reach an equilibrium point, at which point it cannot do anything more.

Conclusion

Entropy is one of the major ideas of thermodynamics - it is, in essence, the 2nd Law of Thermodynamics, and therefore is very important for both physics and chemistry. Entropy plays a part in cosmology, and it could be the cause of the end of the universe - as the universe reaches thermodynamic equilibrium, no more free energy remains. In terms of chemistry, entropy is very useful for calculating whether or not a reaction will occur.

References


[1] NIST, National Institute of Standards and Technology. http://webbook.nist.gov/cgi/cbook.cgi?ID=C7664417&Units=SI&Mask=4#Thermo-Phase

[2] ChemWiki, Standard Enthalpy Changes of Formation. http://chemwiki.ucdavis.edu/Reference/Reference_Tables/Thermodynamics_Tables/T1%3A_Standard_Thermodynamic_Quantities_for_Chemical_Substances_at_25%C2%B0C

[3] ChemWiki, Calculating Entropy Changes. http://chemwiki.ucdavis.edu/Physical_Chemistry/Thermodynamics/State_Functions/Entropy/Calculating_Entropy_Changes

[4] 4College, Entropy and the Second Law of Thermodynamics. http://www.4college.co.uk/a/O/entsurr.php

[5] EntropyLaw, Entropy and the Second Law of Thermodynamics. http://www.entropylaw.com/entropy2ndlaw.html

[6] Jim Lucas, What is the Second Law of Thermodynamics? LiveScience, May 22, 2015. http://www.livescience.com/50941-second-law-thermodynamics.html

[7] HyperPhysics, Second Law: Entropy. http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/seclaw.html

[8] HyperPhysics, Entropy as Time's Arrow. http://hyperphysics.phy-astr.gsu.edu/hbase/therm/entrop.html

[9] Texas A&M University, Thermodynamics: Gibbs Free Energy. https://www.chem.tamu.edu/class/majors/tutorialnotefiles/gibbs.htm

[10] Bodner Research Web, Gibbs Free Energy. http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch21/gibbs.php

[11] UCDavis ChemWiki, Gibbs Free Energy. http://chemwiki.ucdavis.edu/Physical_Chemistry/Thermodynamics/State_Functions/Free_Energy/Gibbs_Free_Energy

[12] HyperPhysics, Gibbs Free Energy. http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/helmholtz.html

[13] James Keeler and Peter Wothers, How Chemical Reactions Happen. March 27, 2003.

[14] HyperPhysics, Gibbs Free Energy and the Spontaneity of Chemical Reactions. http://hyperphysics.phy-astr.gsu.edu/hbase/chemical/gibbspon.html

[15] UCDavis ChemWiki, Second Law of Thermodynamics. http://chemwiki.ucdavis.edu/Physical_Chemistry/Thermodynamics/Laws_of_Thermodynamics/Second_Law_of_Thermodynamics

[16] Chas A. Egan and Charles H. Lineweaver, A Larger Estimate of the Entropy of the Universe. http://arxiv.org/abs/0909.3983

[17] PhysLink, What exactly is the Heat Death of the Universe? http://www.physlink.com/education/AskExperts/ae181.cfm

[18] AskAMathematician, After the Heat Death of the Universe will anything ever happen again? http://www.askamathematician.com/2015/03/q-after-the-heat-death-of-the-universe-will-anything-ever-happen-again/

[19] F. Zwicky, On the Thermodynamic Equilibrium in the Universe. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1085617/

ChemKit Developments

Recently I have been continuing development of ChemKit. It is now able to calculate Gibbs free energy and predict the temperatures at which different changes and reactions will occur, while also being able to calculate the atomic orbital energy levels.

In order to calculate the Gibbs free energy I required the entropy and enthalpy changes of the reaction. While it might be possible to calculate the enthalpy change by storing the relatively simple bond enthalpies (or using the atomic orbitals and working out the bond energies), it proved harder to do so for the entropy changes, so I took the lazier option of just storing the compounds in data tables. This allows the program to calculate the entropy and enthalpy changes, and hence predict the temperature at which a reaction will go, using the typical Gibbs energy equation;

\Delta G = \Delta H - T\Delta S

This proved relatively useful. As shown below it allows for relatively accurate prediction of reactions;

> set temp 300
> gibbs H2O(l) -> H2O(g)
Entropy Change of Reaction: 118.78Jmol-1K-1
Enthalpy Change of Reaction: 44.01kJmol-1
Gibbs Free Energy at 300.0K: 8.376kJmol-1
Will reaction go?: No
Temperature: 370.516922041K
ln K: -3.35980746089
> set temp 400
> gibbs H2O(l) -> H2O(g)
Entropy Change of Reaction: 118.78Jmol-1K-1
Enthalpy Change of Reaction: 44.01kJmol-1
Gibbs Free Energy at 400.0K: -3.502kJmol-1
Will reaction go?: Yes
Temperature: 370.516922041K
ln K: 1.05354993983
>

As you can see, it says the reaction (more a state change in this case - I chose it because it is a well known one) will not occur at 300 kelvin, but will at 400 kelvin. As we know, water boils at 373.15 kelvin (100^{\circ}C), so this seems right. It further predicts that the temperature at which it will finally go is 370.5K - slightly below the temperature normally considered to be the boiling point, but given the use of average data it is relatively close.
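
A quick back-of-the-envelope check of those numbers (this is just the arithmetic, not ChemKit's code):

dH = 44010.0   # enthalpy change of vaporisation (J mol^-1)
dS = 118.78    # entropy change of vaporisation (J K^-1 mol^-1)

def gibbs(T):
    return dH - T * dS

print(gibbs(300.0) / 1000)   # 8.376 kJ mol^-1: positive, will not go
print(gibbs(400.0) / 1000)   # -3.502 kJ mol^-1: negative, will go
print(dH / dS)               # ~370.5 K, where the reaction starts to go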

After writing this I decided to calculate the atomic orbital energies. As ChemKit currently uses electronegativities (which are based on the lowest occupied atomic orbital... kind of) it already sort of uses the energies to predict reaction products. Adding a calculator to work them out, however, makes the change more visible. For this I used the following equation;

E = -R_{y}\frac{Z^{2}}{n^2}

Where R_{y} is the Rydberg constant in eV (13.6eV - essentially the energy of a ground state electron in hydrogen), Z is the nuclear charge and n is the principal quantum number. By adjusting the nuclear charge to take into account the electron shielding this produces some relatively accurate numbers;

&gt; element Na
=== Sodium (Na) ===
Atomic Number: 11
Atomic Mass: 22.98977
Electronegativity: 0.9
1s0 (-1646.29eV): 2
2s0 (-275.515eV): 2
2p-1 (-166.67eV): 2
2p0 (-166.67eV): 2
2p1 (-166.67eV): 2
3s0 (-1.512eV): 1

&gt; element F
=== Fluorine (F) ===
Atomic Number: 9
Atomic Mass: 18.9984
Electronegativity: 4.0
1s0 (-1102.062eV): 2
2s0 (-166.67eV): 2
2p-1 (-85.036eV): 2
2p0 (-85.036eV): 2
2p1 (-85.036eV): 1

&gt; element H
=== Hydrogen (H) ===
Atomic Number: 1
Atomic Mass: 1.00794
Electronegativity: 2.1
1s0 (-13.606eV): 1

&gt;

As you can see from this, the highest energy orbital in fluorine is at -85.0eV, in sodium at -1.5eV and in hydrogen at -13.6eV. Since electrons tend to have a higher probability of being found in an area of lower energy, in NaH the electron is more likely to be found closer to the hydrogen, while in HF it is more likely to be found closer to the fluorine - the electron wants to be in a lower energy state.
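
For the curious, the numbers above can be reproduced with a very simple shielding rule - subtract from the nuclear charge the number of electrons in lower-lying subshells, then apply the Rydberg expression. This is my reconstruction for illustration, not necessarily ChemKit's exact rule:

RYDBERG_EV = 13.606

def orbital_energy(Z, n, inner_electrons):
    # Shield the nucleus with the electrons below this subshell
    z_eff = Z - inner_electrons
    return -RYDBERG_EV * z_eff**2 / n**2

# Sodium (Z=11): 1s2 2s2 2p6 3s1
print(orbital_energy(11, 1, 0))    # 1s: -1646.3 eV
print(orbital_energy(11, 2, 2))    # 2s: -275.5 eV
print(orbital_energy(11, 2, 4))    # 2p: -166.7 eV
print(orbital_energy(11, 3, 10))   # 3s: -1.5 eV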

I am going to continue developing ChemKit as my primary project from now on (it can still be found on my github), and I will hopefully come out with the aforementioned Lorentz Transformation post within the week.

Cherenkov Radiation

Cherenkov radiation is the blue glow that is often seen radiating off of nuclear reactors. It occurs when a charged particle (in the form of beta radiation) is emitted from a nuclear reactor travelling faster than the speed of light in the medium surrounding it. In the case of a nuclear reactor this medium is generally water.

In water the speed of light is lower than in a vacuum - 2.25x10^8m/s compared to 3x10^8m/s. As beta particles (which are electrons) are generally emitted very fast, and are charged, you can end up with particles travelling faster than the speed of light in the water. As such an electron travels it partially polarises the water and causes a disruption in the electromagnetic field. At lower velocities the field can respond elastically and not much happens, but at such high velocities the medium cannot respond quickly enough, and a shockwave of light builds up.
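
A quick worked number: using these two speeds (and the textbook electron rest energy of 511keV), the threshold at which an electron starts to emit Cherenkov light in water comes out at about a quarter of an MeV - easily reached by beta decay:

import math

c_vac = 3e8       # speed of light in a vacuum (m/s)
c_water = 2.25e8  # speed of light in water (m/s)

beta = c_water / c_vac               # minimum speed as a fraction of c
gamma = 1 / math.sqrt(1 - beta**2)
ke_kev = (gamma - 1) * 511           # relativistic kinetic energy threshold

print(beta)             # 0.75
print(round(ke_kev))    # ~262 keV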

Cherenkov radiation is blue because the majority of the light given out is high energy, and therefore high frequency (E=hf) - towards the violet end of the spectrum. Our eyes' blue cones, however, are much more receptive to blue light than violet light, so the glow appears blue to us, and not violet.

Sources;

http://math.ucr.edu/home/baez/physics/Relativity/SpeedOfLight/cherenkov.html

http://math.ucr.edu/home/baez/physics/General/BlueSky/blue_sky.html

 

 

Chemiosmosis

Chemiosmosis is one of the main unifying features of life - it is present in all types of cell: prokaryotic, eukaryotic, and all subdivisions of these. It is, in its most essential form, a way of generating ATP. ATP (adenosine triphosphate) is the major energy currency in living cells, as it powers cellular activities. For example, the sodium potassium pump, which maintains a concentration gradient of sodium and potassium by pumping sodium out of cells and potassium in, relies on ATP. ATP is formed when a phosphate group is attached to a molecule of ADP - adenosine diphosphate - and it is this phosphate group which is removed again to provide the energy. Due to the unstable nature of this bond, ATP is very short lived, and so is not a useful energy store for long periods of time, unlike starch and glucose, which are very stable and can last for a long time - like a potato.
This ATP is therefore very useful, and it is generated much like electricity in a hydroelectric dam. In a hydroelectric dam, water moves over a turbine, turning a generator to produce electricity; in chemiosmosis, protons (H+ ions) move across a semipermeable membrane and turn ATP synthase. ATP synthase is an enzyme on the membrane which is spun by the incoming protons, and this spinning leads to the binding of an inorganic phosphate group (PO4) to ADP to produce ATP.
It is believed that this mechanism may suggest an origin of life. In alkaline hydrothermal vents proton gradients exist naturally, and it is not a big jump to suppose that iron-walled protocells may have formed naturally on these vents and been the precursors to modern cells - as these protocells adapted to their surroundings they may have evolved a basic ATP synthase and pumping mechanisms which allowed them to generate their energy portably, giving the first cells the ability to live separately, and therefore allowing the development of life as we know it.

The Chemistry Project - now ChemSi.

ben@Eddie:~/Development/SI/ChemSi$ python chemi_functions.py
 Welcome to ChemKit (copyright 2015).
 > resultant KI + NaCl
 KI + NaCl -> KCl + NaI
 > set verbose
 Verbose mode on
 > resultant KI + NaCl
 KI + NaCl -> KCl + NaI
 reactants..
 KI: 166.003g/mol
 NaCl: 58.443g/mol
 products..
 KCl: 74.551g/mol; 33.21576375817387%
 NaI: 149.894g/mol; 66.78423624182614%
 > mass C8H18O
 130.228g/mol
 >

Recently I have been working on ChemSi, which is the chemical program I mentioned in my last post. I have since written quite a large amount of code for it to utilise an algorithm I came up with last night at about 1:00AM. It is not the smoothest algorithm, and it does get it wrong about 40% of the time. Regardless, it is relatively good for what is needed here.

In an effort for transparency I will explain how the algorithm works - a toy sketch follows below. It is based on displacement reactions - so for addition and other forms it has some issues. It takes the element with the lowest electronegativity (aka the most nucleophilic element) present in a structure, works out how many valencies it has, and fills them with the highest electronegativity elements. It repeats this until all valencies are filled - but it does mean the original structures are destroyed (covalent bonds are treated the same as ionic), so an OH group may be split up, often with disastrous results. For example, entering 'C2H4 + H2O' into ChemSi will get you 'CH3OH + C + H2', when it should all be a single ethanol molecule. A better method might be to run a bunch of algorithms and rank their products on how few lone elements they have; it could then work out the most likely product.
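
As a toy illustration of the greedy pairing (my own mock-up with a tiny hand-picked data table - not the actual ChemSi code):

ELECTRONEGATIVITY = {"Na": 0.9, "K": 0.8, "Cl": 3.0, "I": 2.5}
VALENCY = {"Na": 1, "K": 1, "Cl": 1, "I": 1}

def pair_up(atoms):
    # Sort by electronegativity, then repeatedly give the least
    # electronegative atom its fill of the most electronegative ones.
    pool = sorted(atoms, key=lambda a: ELECTRONEGATIVITY[a])
    products = []
    while pool:
        centre = pool.pop(0)
        partners = [pool.pop() for _ in range(VALENCY[centre]) if pool]
        products.append((centre, partners))
    return products

print(pair_up(["K", "I", "Na", "Cl"]))  # [('K', ['Cl']), ('Na', ['I'])]
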
Overall it feels like I am making good progress on it. I am hoping to have a fully working product by next Friday (one which can do percentage yields). Over the summer holidays I may attempt to add a GUI (sort of like IrYdium VLab, only open source, up to date, and with every reactant).

Talking about IrYdium, I have found that many programs which do these sorts of chemical reaction predictions (ie Chemist on Android/iOS, IrYdium) have a very limited range of chemicals, and you feel as though the results are preprogrammed. IrYdium, for instance, seems to only have chemicals for neutralizations and for working out how different things affect the rate. If ChemSi does get to the stage of having a GUI, I can assure you it will allow for any chemical mixture, at any temperature (but I cannot say it will always be right ;)).

p53: The Gene that Cracked the Cancer Code

Recently I have been reading the book titled 'p53: The Gene that Cracked the Cancer Code' by Sue Armstrong. I am now reading 'General Chemistry' by Linus Pauling (do not expect a review of this one - it isn't really that sort of book; regardless, it is a very good textbook) and 'The Little Book of String Theory' by Steven S. Gubser.

p53: The Gene that Cracked the Cancer Code


Price (as of writing): £16.99 on Amazon

Publisher Synopsis: 

All of us have lurking in our DNA a most remarkable gene, which has a crucial job - it protects us from cancer. Known simply as p53, this gene constantly scans our cells to ensure that they grow and divide without mishap, as part of the routine maintenance of our bodies. If a cell makes a mistake in copying its DNA during the process of division, p53 stops it in its tracks, summoning a repair team before allowing the cell to carry on dividing. If the mistake is irreparable and the rogue cell threatens to grow out of control, p53 commands the cell to commit suicide. Cancer cannot develop unless p53 itself is damaged or prevented from functioning normally.

Perhaps unsurprisingly, p53 is the most studied single gene in history.

This book tells the story of medical science's mission to unravel the mysteries of this crucial gene, and to get to the heart of what happens in our cells when they turn cancerous. Through the personal accounts of key researchers, p53: The Gene that Cracked the Cancer Code reveals the fascination of the quest for scientific understanding, as well as the huge excitement of the chase for new cures - the hype, the enthusiasm, the lost opportunities, the blind alleys, and the thrilling breakthroughs. And as the long-anticipated revolution in cancer treatment tailored to each individual patient's symptoms begins to take off at last, p53 remains at the cutting edge.

This timely tale of scientific discovery highlights the tremendous recent advances made in our understanding of cancer, a disease that affects more than one in three of us at some point in our lives.

Thoughts

This is a very interesting book; just the first few pages propose questions I had not thought of, such as "Why so few?". Generally we assume that cancer is very common because so many people have it, but when you look into what actually causes cancer it is amazing how few people get it in the first place. Billions of cell divisions go on in your body every day, and it is very rare for any of these to turn cancerous. This book explores this idea and the help that p53 provides in preventing cancers from developing. To put it simply, using an analogy this book employs far more proficiently than I can: p53 is a checkpoint in the synthesis stage of interphase in the mitotic cycle. It prevents damaged DNA from being duplicated.

As well as explaining how p53 works and what it does, the book also gives some insight into the researchers in this field and their struggles against large businesses, such as the tobacco industry when research was published showing that smoking causes cancer. It almost turns into an espionage novel at that point! Overall this is a very good book, and one I would strongly recommend.