Legacy Boot Ubuntu 21.04

Did my regular Ubuntu install with a GPT partition table and a 1 MB unformatted partition at the beginning of the hard drive. The 1 MB partition should have the bios_grub flag.
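
For reference, a rough sketch of how that partition could be created with parted (the device name /dev/vda and the exact offsets are my assumptions, not from the original install):

# assumes /dev/vda; creates a GPT table, a 1 MiB slot, and flags it bios_grub
parted /dev/vda mklabel gpt
parted /dev/vda mkpart biosgrub 1MiB 2MiB
parted /dev/vda set 1 bios_grub on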

After the regular install, I ran


grub-install --target=i386-pc /dev/vda

Then I can boot in both Secure Boot UEFI mode and legacy BIOS (MBR-style) mode.

Upgrading Ubuntu, but what about my old packages?

I’ve upgraded Ubuntu a lot over the years, and oftentimes I’ll start off with a fresh install, but I really want a list of the applications I used before. Well, Ask Ubuntu had the answer for me.

https://askubuntu.com/questions/2389/generating-list-of-manually-installed-packages-and-querying-individual-packages


comm -23 <(apt-mark showmanual | sort -u) <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort -u)

Now I can sort through the old package installs to see if I need or want all of my old packages.

Legacy Boot Ubuntu 20.10

Did my regular Ubuntu install with a GPT partition table and a 1 MB unformatted partition at the beginning of the hard drive. The 1 MB partition should have the bios_grub flag.

After the regular install, I ran


grub-install --target=i386-pc /dev/vda

Then I can boot in both Secure Boot UEFI mode and legacy BIOS (MBR-style) mode.

Ubuntu 17.10 Secure Boot and Legacy Boot install

Ever since Secure Boot was released, I’ve been wary of it. Secure Boot can mean that my freedom on a platform has been taken away. Why does this bother me?

Let me try and look at it from a different perspective for a minute. I’m a manufacturer. I make smart phones/fridges/watches/whatever. I’m making the software, and I don’t want people to mess with it. The best way to prevent that is a system like Secure Boot. Of course, my product doesn’t do as well as I want it to on the market, and upper management has decided it isn’t worth the investment. Losses are cut, and the product line is dropped. Support becomes a thing of the past.

As a user this is very irritating. It is my device. I paid for it. It is mine. I should be able to use it for whatever I want. For some reason, our society has deemed it acceptable that what I buy isn’t really mine. It is owned by some corporation somewhere else. It is like I have “Big Brother” telling me what I can and cannot do with my device. I mean really, most people just want it to run Doom.

However, I can also see the benefit of locking systems down in a University or other public place. So there is a place for locking down the BIOS and configuring Secure Boot.

With that in mind, I would like to be able to have a USB install of Ubuntu be able to run in both a Secure Boot and Legacy boot mode.

I spent a bunch of time on Google, and I couldn’t come up with the right terms, or a good guide on doing this. It seems that most individuals aren’t interested in a “dual boot” option. Eventually I stumbled upon the following post, Partitioning hard disk drives for BIOS-MBR, BIOS-GPT, and UEFI-GPT in Linux. It is a good read, and I would recommend it.

The basics: I simply needed a 1 MB partition as the first partition of a GPT-partitioned disk, with the bios_grub flag on it. Then you need your standard “EFI System Partition” and other Linux partitions.

Typically I’ll do the install in KVM, which by default doesn’t boot with UEFI, so the installer performs the legacy GRUB install.

Now I don’t recall if this next part is required, but it is what I did. I booted with the Ubuntu install media using OVMF in KVM to get the boot to use UEFI. I then do the appropriate mount --bind statements and chroot into my install. I ensure the EFI and boot volumes are mounted.
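
Something like this is what I mean; a minimal sketch, assuming the installed root is already mounted at /mnt (the mount point is an assumption):

# assumes the target root filesystem is mounted at /mnt
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/bash
# inside the chroot, ensure /boot (if separate) and the ESP are mounted
mount /boot/efi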

I then install the signed EFI GRUB packages in Ubuntu.


apt install grub-efi-amd64-signed shim-signed
grub-install --uefi-secure-boot /dev/vda
apt install grub-pc git grub-pc-bin

After GRUB is installed for UEFI, I re-install grub-pc, since in Ubuntu the two packages conflict with each other.

I hope this helps someone else out there.

Uploading Custom Images to Amazon EC2

To upload your image to Amazon EC2, you need to ensure that your image is in the raw format. You can do that by creating the image in the raw format to start with, or you can convert it at a later time. For example, to convert a VirtualBox image to the raw format, you can run the following command.

VBoxManage internalcommands converttoraw ec2.vdi ec2.img

We compress the image to avoid uploading a bunch of zeroed disk.

gzip -9 ec2.img

My image is 8 GB uncompressed and 334 MB compressed.

We create a standard Amazon instance and upload the image to it. We then attach an EBS volume of the appropriate size for our image, and extract the image onto the EBS volume.

gzip -c -d ec2.img.gz | dd of=/dev/xvdf bs=10M

Detach the volume and create a snapshot of it.
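
With the modern AWS CLI this step might look like the following; the volume ID here is a hypothetical placeholder:

# vol-0123456789abcdef0 is a placeholder; substitute your volume's ID
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "custom image root"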

We then register our image with a user-provided kernel. See Custom Kernel Docs.

Here is an example command for Northern Virginia (us-east-1).

ec2-register -a x86_64 -n "CentOS 6.5" -d "CentOS 6.5 x86_64 minimal install \
Provided by me" --root-device-name /dev/sda -b /dev/sda="snap-b9db2b10":8:true -b /dev/sdf=ephemeral0 --kernel aki-b4aa75dd -K somekey.pem -C somecert.pem

Or using the newer AWS CLI command

aws ec2 register-image --architecture x86_64 --name "CentOS 6.5" --description "CentOS 6.5 x86_64 minimal install \
Provided by me" --root-device-name /dev/sda --block-device-mappings "[{\"DeviceName\": \"/dev/sda\", \"Ebs\": {\"SnapshotId\": \"snap-b9db2b10\", \"VolumeSize\":8}},{\"DeviceName\": \"/dev/sdf\",\"VirtualName\":\"ephemeral0\"}]" --kernel-id aki-b4aa75dd

Building Custom Amazon Images

Working with Amazon EC2 images, it has been helpful to build custom images so they match the images that are installed in customer environments. I also find it helpful to be able to run the same operating system that is running in dev and production environments.

CentOS and several other Linux operating systems will work with the default kernel. Building a custom image is pretty easy to do, but there are a few things you need to do to get your system to work on Amazon EC2 instances. Here are the steps we took to build this image.

1) When doing the install, select the correct partitioning method. I’ve found that LVM doesn’t work with the boot system that Amazon is using for custom kernels. I also found it helpful to have the root partition also contain grub. Here is my partitioning:

/ <- 8 GB  <-- FIRST PARTITION!!!
Swap <- 1 GB

2) I then did a minimal Linux install.

3) After configuring the root password, I set up the basics on the machine and add the ec2-user.

dhclient eth0
adduser ec2-user
yum update
yum install system-config-firewall-tui system-config-network-tui setuptool vim screen openssh-clients rsync wget man acpid ntp
chkconfig acpid on
chkconfig ntpd on
setup
# enable SSH in the firewall
# configure the network for DHCP

4) Configuring the network is important to get right; otherwise our machine will not be accessible to other machines on the network. I configure the network using DHCP, and then remove the hardware address line from the configuration, so on boot the new network devices will get the default configuration.

vim /etc/sysconfig/networking/devices/ifcfg-eth0
# remove the MAC address line
# set ONBOOT=yes
# set NM_CONTROLLED=no

5) I set up the ec2-ami tools and the cloud-init tools that are provided for CentOS using the EPEL repository. I use the cloud-init tools to get the keys installed onto the server at boot time.

yum install http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.noarch.rpm
# install the epel-release RPM from http://linux.mirrors.es.net/fedora-epel/6/i386/repoview/epel-release.html
yum install cloud-init
vim /etc/cloud/cloud.cfg

chpasswd:
 list: |
  root:RANDOM
  ec2-user:RANDOM
 expire: false

6) I wanted to keep SELinux running on the system, and I found that I needed to restore the SELinux contexts on the files that cloud-init was creating on boot. This allows the SSH keys to work with cloud-init, and we get to keep SELinux on.

# work around the SELinux problem and the persistent net rules
vim /etc/rc.d/rc.local
# add the following lines
rm -f /etc/udev/rules.d/70-persistent-net.rules
restorecon -r /home/*/.ssh

# ensure menu.lst is a symbolic link to grub.conf
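# for example (assumed CentOS 6 paths; PV-GRUB looks for menu.lst):
ln -s /boot/grub/grub.conf /boot/grub/menu.lst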

yum install sudo

#as root
vim /etc/sudoers.d/cloud-init
#add line
ec2-user ALL = NOPASSWD: ALL

chmod a-rwx /etc/sudoers.d/cloud-init
chmod ug+r /etc/sudoers.d/cloud-init

#test sudo access from ec2-user

7) We get rid of any existing keys in the environment so the image can be shared with others.

#delete keys
find / -name "authorized_keys" -exec rm -f {} \;

# delete third-party keys (CVS, SVN, Git, etc.)

exit
login

#delete shell history
find /root/.*history /home/*/.*history -exec rm -f {} \;

#fill disk with zeros
cat /dev/zero > /tmp/zerofill
sleep 1
sync
rm -f /tmp/zerofill
sleep 1
sync

shutdown -h -P now

MSP430 Temperature Logger – Part 4

Using the temperature monitor in the Incubator, I’ve run into a problem.

Temperature Monitor In Incubator

The problem is that the temperature seems to be fluctuating more than I expected. Since we haven’t noticed any problems with the other thermometers, I’m going to assume that the problem is with my code. Perhaps it has something to do with my oversampling code? Do I need to average the samples better? Well, let us take a look at some data. Let’s first look at the offending data here.

Wave Data

As you can see, it looks like the data is forming a wave. Is this something caused by the incubator itself? Perhaps noise on the ADC line? Let’s look at some more data.

Missing Wave

This is what the data looks like when the temperature logger is sitting around the house. Notice how it is missing the wave effect.

The following is roughly another 9 days of recording data at fifteen-minute intervals. Notice how we still have the wave going on throughout the data.

9 Days in Incubator

Well, after a lot of logging, it seems that we still have a problem. Now I decided to record more data, but with extra data points. So now instead of just storing the oversampled value, we will also store the average of the oversampled value, the non-oversampled value, and the average of the non-oversampled value. The averages are taken over four sequential readings. Here are the results from storing them once every fifteen minutes (so far all the temperature recordings have been taken at 15-minute intervals in hopes of storing an entire incubation period worth of data).
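
As a rough sketch of the averaging step (the helper name and types are mine, not from the logger’s actual source):

/* average four sequential readings; the 32-bit sum cannot overflow */
unsigned int avg4(const unsigned int s[4])
{
    unsigned long sum = (unsigned long)s[0] + s[1] + s[2] + s[3];
    return (unsigned int)(sum / 4);
}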

More Data!

This new set of data seems to rule out problems with the oversampling; however, it does show that averaging appears to provide somewhat better results. This still leaves the question: are we just getting ADC noise? To rule out getting a group of bad readings, let us try recording the temperature every minute and see what that looks like.

Every Minute

This appears to rule out getting noise on the ADC line; however, it does leave a few questions unresolved. Why do the other thermometers not report the same type of behavior? Is this just a problem of observing the analog thermometers at the right time? Is this a sensitivity issue? Perhaps the MSP430 calibration is off?

MSP430 Temperature Logger – Part 3

When running the data logger, I found that I ended up with a big hole in the data that I’ve been storing to flash.

Data Logging Hole

The program is supposed to find the next available memory location for writing.


{
    int i;

    /* Scan for the first unwritten word; erased flash reads 0xffff. */
    for (i = 0; i < INTS_TOTAL; i++) {
        if (*(INFOE + i) == 0xffff)
            break;
    }

    /* If no free word was found, i equals INTS_TOTAL (so the test must
       be >=, not >): erase the segment and start over at the beginning. */
    if (i >= INTS_TOTAL) {
        i = 0;
        flash_erase(INFOE);
    }

    next_memory = i;
}

So what is going wrong?

I’ve come up with two possible theories as to the failure. The first idea is that the MSP430 is underpowered when attempting to write to flash. The second idea is that the oscillator has not yet stabilized when writing to flash.

Using a voltmeter, I was able to determine that the MSP430 battery voltage was running low when this data was being collected; but why the hole in the data? This could be temperature related: as the temperature drops in the household, the battery voltage drops as well. Is that enough to cause the hole? I’m not sure.

Possible solutions?

1) Move the logging event to occur later, as this may allow for the oscillator to stabilize before attempting to write to flash.

2) Ensure that the battery has the proper voltage. This could be done by checking the battery voltage manually, or by having the MSP430 use its built-in ADC to monitor the battery (see the sketch below).
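
Here is a minimal sketch of that second option, assuming an ADC10-equipped MSP430 (register names from msp430.h; the delay for the reference to settle is omitted for brevity):

#include <msp430.h>

/* Sample the internal (Vcc - Vss) / 2 channel against the 1.5 V reference.
   Vcc is then roughly raw * 2 * 1.5 V / 1023. */
unsigned int read_vcc_raw(void)
{
    ADC10CTL0 = SREF_1 | ADC10SHT_3 | REFON | ADC10ON; /* 1.5 V internal ref */
    ADC10CTL1 = INCH_11;                               /* (Vcc - Vss) / 2   */
    ADC10CTL0 |= ENC | ADC10SC;                        /* start conversion  */
    while (ADC10CTL1 & ADC10BUSY);                     /* wait until done   */
    return ADC10MEM;
}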

MSP430 Temperature Logger

My family has started incubating eggs, and it has been a real struggle to keep the temperature in the right range. Perhaps it is the incubator, perhaps user error. At any rate, I wanted to lend a hand with my technical know-how, so I have begun working on a project. The basic concept is that I want to create a temperature logger to store data for the entire period of incubation, and also an alarm or buzzer for when the temperature is out of bounds. Here are the results for three days of incubating.

Three Days of Logging

The temperature is in F multiplied by 100: basically, 98.00 degrees is stored as 9800.

For the project I’m using the MSP430 with the factory temperature calibration. The ADC in the MSP430 is a 10-bit ADC, but it is only sensitive to about 0.5 degrees C. To increase the sensitivity, I’m using oversampling to get about 13 bits of resolution. I then use integer math to convert the temperature to F with two decimal places. I check the temperature every 10 seconds and alarm if the temp is too high or too low.
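
The oversampling idea, as a sketch (adc_read() is an assumed helper, not my actual function): to gain n extra bits, sum 4^n samples and shift right by n, so with n = 3, sixty-four 10-bit samples yield roughly a 13-bit result.

/* oversample a 10-bit ADC to ~13 bits; adc_read() is hypothetical */
unsigned int read_temp_13bit(void)
{
    unsigned long sum = 0;
    int i;
    for (i = 0; i < 64; i++)
        sum += adc_read();           /* one raw 10-bit sample */
    return (unsigned int)(sum >> 3); /* 64 samples, decimate by 2^3 */
}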

Right now the data logger is storing temperature at about 15-minute intervals. I need to add a crystal so that I can measure this more precisely, as the times right now are approximate.
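
Once a 32.768 kHz watch crystal is on XIN/XOUT, something like this sketch could generate exact one-second ticks from Timer_A (again an assumption, not my current code):

#include <msp430.h>

/* Timer_A clocked from ACLK (the 32 kHz crystal), up mode:
   the CCR0 interrupt then fires exactly once per second. */
void timer_init(void)
{
    TACCR0 = 32768 - 1;      /* one second of ACLK ticks */
    TACCTL0 = CCIE;          /* enable the compare interrupt */
    TACTL = TASSEL_1 | MC_1; /* ACLK source, up mode */
}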

The buzzer / alarm is proving a bit problematic for me. Right now, I have a 70 dB buzzer, but it seems too quiet.

The other problem I have is battery life. I’m currently using one or two CR2032s (http://en.wikipedia.org/wiki/CR2032_battery). When I’m using one, the battery should last two or maybe three incubations. When I’m using two batteries I need to regulate the voltage, but the current voltage regulator I’m using draws too much power, so battery life is estimated to be roughly 30 hours. The two-battery approach may be needed to keep the buzzer well powered. I’m looking at a new LDO voltage regulator, and that should take care of the problem.

I’ll put some pictures up of my temperature logger as I progress. Right now it is in the prototype stages on a breadboard.

Kim