Friday, February 8. 2013
I've now completed the first patch adding arm64 support to klibc 2.0.1 - it's completely untested and likely to break things, so it's up for review, not usage. Those who are brave enough to look at the patch, please report issues via the github issue tracker.
35 files changed, 509 insertions(+), 19 deletions(-)
21 new C files to replace removed syscalls, most of which simply borrow from glibc and provide a general purpose call (like open) instead of the actual syscall (openat).
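The pattern for those wrappers is straightforward; a minimal sketch of one (my own illustration, not klibc's actual code - AArch64 drops the legacy open syscall, so a general-purpose open() has to be built on openat with AT_FDCWD) might look like:

```c
/* Hypothetical sketch of a general-purpose open() built on the openat
 * syscall, in the style described above -- NOT klibc's actual code.
 * openat(AT_FDCWD, path, ...) behaves exactly like open(path, ...). */
#define _GNU_SOURCE
#include <fcntl.h>        /* AT_FDCWD, O_* flags, mode_t */
#include <stdarg.h>
#include <sys/syscall.h>  /* SYS_openat */
#include <unistd.h>       /* syscall() */

int my_open(const char *path, int flags, ...)
{
        mode_t mode = 0;

        /* The mode argument only matters when a file may be created. */
        if (flags & O_CREAT) {
                va_list ap;
                va_start(ap, flags);
                mode = (mode_t)va_arg(ap, int);
                va_end(ap);
        }
        return (int)syscall(SYS_openat, AT_FDCWD, path, flags, mode);
}
```

The same shape repeats for the other removed syscalls: accept the traditional arguments, then forward to the *at (or otherwise generalised) syscall that the new architecture actually provides.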
Next is to test the build and look into what is necessary to push the patch to the current 2.0.3 upstream.
Friday, January 25. 2013
My 1Tb laptop drive started misbehaving a few weeks ago, just spending a lot of time spinning when it should have been reading frequently changed files (like the browser cache). I was tempted to blame the browser at that point as no other packages appeared affected.
Wednesday night, a routine package upgrade on unstable brought in a bunch of qt4 updates which I wanted and a virtualbox update that I didn't see much point in delaying ... 15 minutes later I couldn't work out why the virtualbox dkms task was still running, spotted it spinning in depmod and found some alarming messages in dmesg about short reads and errors reading from the hard drive. Hmm. The hard drive was out of control at this point; it quickly became apparent that no disc access was going to be possible and I couldn't get to a terminal to kill the current tasks, so I killed the power. The fsck which followed reboot showed more of the errors I'd seen in dmesg and fell back to the manual intervention stage. After a few hours of confirming fsck's attempts to fix the errors, it finally finished. A short period of usage showed that although fsck had completed, the drive was not happy and was starting to give short reads on other parts of the filesystem, resulting in ~40% of the filesystem appearing to be read-only while the rest was read-write. Somehow, I didn't think this was a welcome feature, as the areas affected appeared to be quite random.
The drive in question was a replacement for the original 300Gb drive supplied with the ThinkPad, so a quick bit of switching of drives into a caddy and I could rsync my data off the large drive onto the smaller one. The rsync itself took a lot longer than it should have done because it got lots of short reads too (principally in /lib/modules/3.2.0-4/ and in the browser cache directories - which had been the original symptom - as well as most other places where one could have expected processes to have open files when the drive failed).
Now, the 1Tb drive was a pig to fit into the ThinkPad originally because it was too big for the bay, but I fitted it anyway. Yes, that was probably a mistake. It certainly meant that, to fit it, I had to forgo the useful caddy provided by Lenovo which makes removal of the drive simple. Indeed, the drive was wedged into the bay so tightly that it wasn't going to come out with normal levels of persuasion. This probably contributed to the failure of the drive, so live and learn.
With help from Andy Simpkins (it's always handy to have a hardware engineer on hand at times like this), the keyboard was lifted out, the case was dismantled and just enough room was made to get a screwdriver in behind the sata drive and lever it out of the bay. OK, rebuild laptop, put replacement drive into the caddy (because the smaller capacity drive is also a lot smaller in height than the 1Tb and therefore has plenty of clearance between it and the bay) and move on to the software recovery stage.
Hint: if this happens again, before turning off the broken system for the last time, just remember to download a recent Debian ISO to a USB stick - it saves having to ask someone else or find another machine to do the download. (Thanks Andy...)
OK, so after the usual complaints on reboot that there was no operating system, F12 got the boot order menu up and I was in Debian Installer Rescue Mode. Reinstalling grub failed initially for a few reasons:
A few iterations later, I had a working /dev directory inside the /target chroot, bind mounted from the /dev outside the chroot, I was able to mount proc and sys, so grub was finally happy to reinstall itself and then update the initramfs setup for the new drive.
Reboot, another fsck, all appeared well. I was able to login via the terminal but not in X. Hmmm. Stop xdm, startx manually from the terminal, problems with /tmp/ - permission denied. Oops. Yes, it does help to create /tmp with the right permissions....
The final stage was to complete the
So now I'm back on the original drive, albeit temporarily without any swap whatsoever (because I didn't partition the replacement drive to create /dev/sda5 before doing the copy) and now I remember the second reason I wanted to replace the original drive with the 1Tb drive - the original drive is as NOISY as hell. The whole edge of the laptop vibrates constantly, to the point that I can feel the vibration under the keys as I type. It's not that the drive is loose in the bay, it's just a constant vibration.
But, I have kept all my data and I have a usable laptop for the BSP this weekend. I will be looking at an SSD drive to replace this one though and having also found my old Acer laptop with power supply, I can now reference this entry when I transfer the system a second time.
Sunday, January 13. 2013
With significant assistance from Steve McIntyre and some judicious delving into the ARM Information Centre, I've now got the assembly portions of klibc sorted (but untested) for AArch64 (arm64).
Andy & I started by copying the old ARM support as a new directory, and one of the final steps was to remove a whole bunch of legacy code from the days before Thumb and all the #ifdef lines which went with it. Some files disappeared entirely. setjmp.S was the largest amount of work: the load & store multiple instructions of ARMv7 are gone in ARMv8, so the stmia mnemonic had to be expanded into multiple stp instructions - but that makes it more explicit, so it's not a bad thing.
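Whatever the instruction sequence, the C-level contract that setjmp.S has to honour is unchanged. A tiny standard-C exercise of the semantics any port must preserve (my own sketch, nothing klibc-specific):

```c
/* Standard setjmp/longjmp behaviour that any architecture's setjmp.S
 * must provide: the first setjmp returns 0, and longjmp(env, val)
 * makes it "return" val, with saved register state restored. */
#include <setjmp.h>

static jmp_buf env;

static void jump_back(int val)
{
        longjmp(env, val);        /* never returns; resumes at setjmp */
}

int setjmp_demo(void)
{
        volatile int phase = 0;   /* volatile: guaranteed across longjmp */
        int ret = setjmp(env);

        if (ret == 0) {
                phase = 1;        /* direct invocation path */
                jump_back(42);
        }
        /* We arrive here a second time, via longjmp. */
        return (phase == 1) ? ret : -1;
}
```

If the register save/restore in setjmp.S is wrong, tests like this are exactly where it shows up, so it's a useful smoke test for the new stp-based code.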
Steve & I borrowed from the glibc AArch64 code from upstream by Marcus Shawcroft, simplifying the glibc macros for klibc and also got clues about what extra registers needed handling compared to ARMv7.
I've now got to look at some traditional cross-compilation issues, because the Linaro AArch64 toolchain doesn't install to typical Debian cross-building paths. The build now moves past the AArch64-specific assembly but fails later, when the C code gets the wrong include path and ends up including /usr/include/i386-linux-gnu/asm/byteorder.h, with predictable results.
If someone has an AArch64 toolchain already set up, feel free to clone my modified klibc tree and let me know if there are subsequent build errors. Of course, if you fancy testing a build in the Foundation Model for ARMv8, that would be good too! (Report issues via github.)
Whilst I'm sorting out the toolchain, I've also been updating perl-cross-debian (which has also seen some more upstream testing and improvement).
Friday, January 11. 2013
Working with perl upstream, we're getting closer to a fully cross-built upstream perl without needing the external perl installation. The branch (which is also available here with a few of my changes) now builds a host miniperl, cross-builds the rest of perl and almost gets through the rest of the build by using the host miniperl to handle the extensions up as far as XS::Typemap:
Note the change from ./miniperl (which is itself a bug as it should be ./host/miniperl) to ./perl which is, naturally, an armel binary. It's also copied into the local directory, replacing the system perl if I copy it in.
So, more to do, but at least it gets this far.
Improved support for the extensions should also make it easier to clean up the current Debian cross-build diff which is the remaining bit of awkwardness / kludge.
Sunday, January 6. 2013
I'm still working on perl-cross-debian (just uploaded 0.0.2) and there's more to do on that with upstream but part of the reason to work on perl cross-building is to do what I can to help with the ARM64 port.
So, I went back to Wookey's talk at DebConf12 which leads to the current list of cross-build results for arm64 and started through the list.
coreutils is listed as failing but that was an archive error (MD5sum hash mismatch), so that just needs a retry. I don't have access to that buildd, yet, so nothing I can do there.
Next on the list (excluding those just waiting for build-deps) was klibc.
Turns out that this is a build failure I understood, at least initially. A little digging and a trivial patch was begun:
Alongside a trivial change to
OK, then things get a bit more awkward,
Hmm. Assembly. Well, yes, I've done assembly before, I know what mov should normally do, sp is likely to be the stack pointer .... where's my AArch64 assembly PDF again... PRD03-GENC-010197 ...
OK, so maybe the r0 and r1 should be x0 and x1, hmm, that at least doesn't raise assembly errors. So a tentative change:
Next stage, however, leaves me quite a bit more lost:
So now I'm out of my depth in AArch64 assembly (apart from the recurrence of mov r0 vs mov x0 etc.). If the above is useful then maybe someone can work out what is wrong with setjmp.S, or whether AArch64 just means that klibc needs to gain an arch/arm64/ directory and not try to duplicate each entire assembly block within #ifdef clauses.
I don't really know where else to put an incomplete investigation like this, so it's here for anyone to find.
(Oh, and if you're reading those arm64 cross-build logs, then a few hundred occurrences of
I may try busybox or libusb next. libusb looks like a classic "you might have told me to cross-compile but I'm going to use g++ anyway because I know best" cross-building problem, indicative of yet another BDBS. sigh.
Getting started with 64-bit ARM development
ARMv8 images for developers
AArch64 for everyone, Marcin Juszkiewicz
Howto/HelloAarch64 - Linaro wiki
AArch64 gcc options
Thursday, December 13. 2012
This is the listing for my local cross-build-only repository for perl:
Prior to this, 5.14.2-15 also cross-built.
I've just pushed the update which fixes 5.16 from current Debian experimental.
This means that I'm ready to push perl-cross-debian into experimental via NEW. Whilst the package is in NEW, I will be approaching perl upstream about the necessary changes for Makefile.SH and updating the existing bug reports #285559 and #633884 with the necessary changes for debian/rules.
In the meantime, the changes for Makefile.SH and debian/rules exist within the perl-cross-debian source code - the patch for Makefile.SH is the same for 5.14.2 as for 5.16.2 and I don't see a need, yet, for this to be any different with current perl upstream. Likewise, the patch for debian/rules works for both 5.14.2 and 5.16.2. All the version-specific (and architecture-specific) data lives in perl-cross-debian.
So it's time to tag perl-cross-debian 0.0.1 and upload to ftp-master as a native package aimed at experimental.
What's left to do? TESTING!
I've only tested with armel, using the Emdebian cross-building toolchains from Squeeze and the old dpkg-cross style cross-dependency installation paths. There is outline code for armhf and arm64 but these need testing. The code also needs testing with the latest MultiArch cross-building toolchains. This should be a simple matter of checking if the dpkg-cross style paths exist and looking for MultiArch if not.
Right now, all of this is "worksforme" grade. It needs others to have a go and file bugs. Until the package is through NEW, feel free to use the issue tracker on the perl-cross-debian github site.
Please read through the documentation in the source code and the manpages in the package (xml in the source code) and tell me if some of it isn't clear.
Sunday, November 25. 2012
Long term maintenance of cross-build support for the Debian configuration of perl has now gained some code at github and an ITP: #694326 for Debian.
There's some working code for perl 5.14 and initial work on 5.16 (which isn't complete yet).
This will dramatically simplify the patch for #633884 and provide a base for getting another part of that patch into upstream (Makefile.SH). (Thanks to Steve McIntyre & Peter Pearse for the body of the patch itself.)
So the config*variant files will live in /usr/share/perl-cross-debian/$arch/$perl_version/
Sunday, November 18. 2012
After prompts from Wookey and Steve McIntyre, I decided to look at #285559 and #633884 for perl cross-build support and then port that support forward to the current perl in Wheezy and on to the version of perl currently in experimental. The first patch is for perl 5.8, the second for perl 5.12, neither of which is available currently in Debian. snapshot.debian.org provided the 5.12 source, but that no longer cross-builds with the patch.
The problem, as with any cross build, is that the build must avoid trying to execute binaries compiled within the build to achieve the test results required by ./configure (or in the case of perl, Configure). dpkg-cross has one collection of cache values but the intention was always to migrate the package-specific support data into the packages themselves and keep the architecture-specific data in dpkg-cross or dpkg-dev. Therefore, the approach taken in #633884 would be correct, if only there was a way of ensuring that the cached values remain in sync with the relevant Debian package.
I'll note here that I am aware of other ways of cross-building perl, this is particularly concerned with cross-building the Debian configuration of perl as a Debian package and using Debian or Emdebian cross-compilers. After all, the objective is to support bootstrapping Debian onto new architectures. However, I fully expect this to be just as usable with Ubuntu packages of perl compiled with, e.g. Linaro cross-compilers but I haven't yet looked at the differences between perl in Debian vs Ubuntu in any detail.
I've just got perl 5.14.2 cross-building for armel using the Emdebian gcc-4.4 cross-compiler (4.4.5-8) on a Debian sid amd64 machine without errors (it needs testing, which I'll look at later), so now is the time to document how it is done and what needs to be fixed. I've already discussed part of this with the current perl maintainers in Debian and, subject to just how the update mechanism works, have outline approval for pushing these changes into the Debian package and working with upstream where appropriate. The cache data itself might live in a separate source package which will use a strict dependency on perl to ensure that it remains in sync with the version which it can cross-build. Alternatively, if I can correctly partition the cache data between architecture-specific (and therefore generated from the existing files) and package_$version specific, then it may be possible to push a much smaller patch into the Debian perl package. This would start with some common data, calculate the arch-specific data and then look for some version-specific data, gleaned from Debian porter boxes whilst the version is in Debian experimental.
The key point is that I've offered to provide this support for the long term, ensuring that we don't end up with future stable releases of Debian having a perl package which cannot be cross-built. (To achieve that, we will also end up with versions of perl in Debian testing which also cross-build.)
This cross-build is still using dpkg-cross paths, not MultiArch paths, and this will need to be changed eventually. (e.g. by the source package providing two binaries, one which uses MultiArch and one which expects dpkg-cross paths.) The changes include patches for the upstream Makefile.SH, debian/rules and the cache data itself. Depending on where the cache data finally lives, the new support might or might not use the upstream Cross/ directory as the current contents date from the Zaurus support and don't appear to be that useful for current versions of perl.
The cache data itself has several problems:
That last point is important because it means that the cache data is not useful upstream as a block. It also means that generating the cache data for a specific Debian package means running the generation code on the native architecture with all of the Debian build-dependencies installed for the full perl build. This is going to complicate the use of this method for new architectures like arm64.
My objective for the long term maintenance of this code is to create sufficient data that a new architecture can be bootstrapped by judicious use of some form of template. Quite how that works out, only time will tell. I expect this will involve isolating the data which is truly architecture-specific (and doesn't change between perl versions) from the data related to the tests for build-dependencies (which does change between perl versions), and then working out how to deal with any remainder. A new architecture for a specific perl version should then just be a case of populating the arch-specific data such as the size of a pointer/char and the format specifiers for long long etc. alongside the existing (and correct) data for the current version of perl.
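For illustration, the truly architecture-specific values are exactly the kind of facts a tiny C program can report. This is my own sketch, not Configure's actual probes, and the key names here are invented - the real cache files use Configure's own variable names:

```c
/* Illustrative probe for the architecture-specific facts perl's
 * Configure caches: type sizes and a format specifier for long long.
 * The names (ptrsize, quadfmt, ...) are invented for this sketch. */
#include <stddef.h>
#include <stdio.h>

/* Writes a config-style summary line into buf, e.g.
 * "ptrsize=8 charsize=1 longsize=8 longlongsize=8 quadfmt=%lld". */
int arch_probe(char *buf, size_t len)
{
        return snprintf(buf, len,
                        "ptrsize=%zu charsize=%zu longsize=%zu "
                        "longlongsize=%zu quadfmt=%%lld",
                        sizeof(void *), sizeof(char), sizeof(long),
                        sizeof(long long));
}
```

Run natively on each porter box, output like this is the version-independent part of the cache; everything else should fall out of the version-specific template.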
Generating the cache data natively
The perl build repeats twice (three builds in total) and each build provides and requires slightly different cache data - static, debug and shared. Therefore, the maintenance code will need to provide a script which can run the correct configuration step for each mode, copy out the cache data for each one and clean up. The script will need to run inside a buildd chroot on a porter box (I'm looking at using abel.debian.org and harris.debian.org for this work so far) so that the derived data matches what the corresponding Debian native build would use. The data then needs slight modification - typically to replace the absolute paths with PERL_BUILD_DIR. It may also be necessary to change the value of cc, ranlib and other compiler-related values to the relevant cross-compiler executables. That should be possible to arrange within the build of the cache data support package itself, allowing new cache files to be dropped in directly from the porter box.
The configuration step may need to be optimised within debian/rules of perl itself as it currently proceeds on from the bare configuration to do some actual building but I need to compare the data to see if a bare config is modified later. The test step can be omitted already. Each step is performed as:
That is repeated for perl.debug and libperl.so.$(VERSION) where $VERSION comes from :
The files to be copied out are:
There is a lot of scope for templating of some form here, e.g. config.h.debug is 4,686 lines long but only 41 of those lines differ between amd64 and armhf for the same version of perl (and some of those can be identified from existing architecture-specific constants) which should make for a much smaller patch.
Architecture-specific cache data for perl
So far, the existing patches only deal with armel and armhf. If I compare the differences between armel & armhf, it comes down to:
However, comparing armel and armhf doesn't provide sufficient info for deriving arm64 or mips etc. Comparing the same versions for armhf and amd64 shows the range of differences more clearly. Typical architecture differences exist like the size of a long, flags to denote if the compiler can cast negative floats to 32bit ints and the sprintf format specifier strings for handling floats and doubles. The data also includes some less expected ones like:
I'm not at all sure why that is arch-specific - if anyone knows, email codehelp @ d.o - same address if anyone fancies helping out ....
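One of the probes mentioned above - can the compiler cast a negative float to a 32-bit int correctly - is easy to express in C. This is a sketch of the kind of check Configure performs, not its actual test code:

```c
/* Sketch of a Configure-style runtime probe: does casting a negative
 * floating point value to a 32-bit integer truncate toward zero as
 * C requires? On some historic platforms this misbehaved, hence the
 * cached flag. */
#include <stdint.h>

int can_cast_negative_float(void)
{
        volatile float f = -123.75f;  /* volatile defeats constant folding */
        int32_t i = (int32_t)f;       /* C truncates toward zero: -123 */

        return i == -123;
}
```

Probes like this have to run on the target architecture, which is precisely why the results end up in per-architecture cache data rather than being computed during a cross-build.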
Cross-builds and debclean
When playing with the cross-build, remember to use the cross-build clean support, not just debclean:
That wasted quite a bit of my time initially, having to blow away the entire tree, unpack it from the original apt sources and repatch it. (Once Wheezy is out, I may actually investigate getting debclean to support the -a switch.)
OK, that's an introduction, I'm planning on pushing the cross-build support code onto github soon-ish and doing some testing of the cross-built perl binaries in a chroot on an armel box. I'll detail that in another blog post when it's available.
Next step is to look at perl 5.16 and then current perl upstream git to see how to get Makefile.SH fixed for the long term.