Switching back to Linux as my development host

Posted: - Modified: | development, geek, work

I switched back to using my Ubuntu partition as my primary development environment instead of using Windows 7. I still use a virtual machine to isolate development-related configuration from the rest of my system.

Linux makes better use of my computer memory. I have 4 GB of RAM on this laptop. My 32-bit Windows 7 can only access 3 GB of it, a limit I regularly run into. The resulting swapping slows down my development enough to be noticeable. I could switch to 64-bit Windows, but reinstalling is a disruption I don’t want to deal with right now. On Linux, my processes can access up to 4 GB of memory each, which means there’s even room for future expansion. I’m at just the right level now – using 3.9 GB, but not swapping out.

Using Linux also means that it’s easy for me to edit files in my virtual machine. Instead of setting up Samba + Eclipse, I can use ssh -X to connect to my virtual machine and run Emacs graphically. If I want to use Eclipse for step-by-step debugging, I can use sshfs, smbfs, or NFS to mount the files.
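
For example, both approaches look roughly like this (the host name and paths are made up; substitute your own):

# Run Emacs inside the VM but display it on the host desktop
ssh -X me@devvm emacs /var/www/drupal/sites/all/modules/custom/mymodule.module

# Or mount the VM's code tree locally so Eclipse (or anything else) can open the files
mkdir -p ~/mnt/devvm
sshfs me@devvm:/var/www/drupal ~/mnt/devvm
fusermount -u ~/mnt/devvm    # unmount when finished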

The key things I liked about Microsoft Windows 7 were Autodesk Sketchbook Pro and Microsoft OneNote. I can draw a bit using the GIMP or Inkscape, although I really need to figure out the smoothing settings (or whatever it is) that would make drawing as fun as it is in those other programs. I don’t need those programs when I’m focused on development, though, and it’s easy enough to reboot if I want to switch.

Hibernate doesn’t quite work, but I’ve been suspending the computer or shutting it down, and that works fine. Pretty cool!

Managing configuration changes in Drupal

| development, drupal, geek, work

One of our clients asked if we had any tips for documenting and managing Drupal configuration, modules, versions, settings, and so on. She wrote, “It’s getting difficult to keep track of what we’ve changed, when, for what reason, and what settings are in there that need to be moved to production versus what settings are there for testing purposes.” Here’s what works for us.

Version control: A good distributed version control system is key. This allows you to save and log versions of your source code, merge changes from multiple developers, review differences, and roll back to a specified version. I use Git whenever I can because it allows much more flexibility in managing changes. I like the way it makes it easy to branch code, too, so I can start working on something experimental without interfering with the rest of the code.
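
A minimal sketch of that branching workflow (the branch name and commit message are made up):

git checkout -b experimental-layout     # start the experiment on its own branch
# ...hack and commit as usual...
git commit -am "Try an alternate front page layout"
git checkout master                     # switch back to the main line of work
git merge experimental-layout           # bring the experiment in once it pans out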

Issue tracking: Use a structured issue-tracking or trouble-ticketing system to manage your to-dos. That way, you can see the status of different items, refer to specific issues in your version control log entries, and make sure that nothing gets forgotten. Better yet, set up an issue tracker that’s integrated with your version control system, so you can see the changes that are associated with an issue. I’ve started using Redmine, but there are plenty of options. Find one that works well with the way your team works.
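
With Redmine’s stock referencing keywords, mentioning an issue number in a commit message is enough to link the commit to that issue (the issue number here is hypothetical):

git commit -m "Limit the news listing to published nodes, refs #42"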

Local development environments and an integration server: Developers should be able to experiment and test locally before they share their changes, and they shouldn’t have to deal with interference from other people’s changes. They should also be able to refer to a common integration server that will be used as the basis for production code.

I typically set up a local development environment using a Linux-based virtual machine so that I can isolate all the items for a specific project. When I’m happy with the changes I’ve made to my local environment, I convert them to code (see Features below) and commit the changes to the source code repository. Then I update the integration server with the new code and confirm that my changes work there. I periodically load other developers’ changes and a backup of the integration server database into my local environment, so that I’m sure I’m working with the latest copy.

Database backups: I use Backup and Migrate for automatic trimmed-down backups of the integration server database. These are regularly committed to the version control repository so that we can load the changes in our local development environment or go back to a specific point in time.
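
A rough sketch of that loop, assuming Backup and Migrate’s Drush commands are available (the backup path depends on your destination settings):

drush bam-backup                                    # write a trimmed-down dump of the database
git add sites/default/files/backup_migrate/manual   # wherever your manual backups destination points
git commit -m "Integration database snapshot"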

Turning configuration into code: You can use the Features module to convert most Drupal configuration changes into code that you can commit to your version control repository.

There are some quirks to watch out for:

  • Features aren’t automatically enabled, so you may want to have one overall feature that depends on any sub-features you create. If you are using Features to manage the configuration of a site and you don’t care about breaking Features into smaller reusable components, you might consider putting all of your changes into one big Feature.
  • Variables are under the somewhat unintuitively named category of Strongarm.
  • Features doesn’t handle deletion of fields well, so delete fields directly on the integration server.
  • Some changes are not exportable, such as nodequeue. Make those changes directly on the integration server.

You want your integration server to be at the default state for all features. On your local system, make the changes you want, then create or update features to encapsulate those changes. Commit the features to your version control repository. You can check if you’ve captured all the changes by reverting your database to the server copy and verifying your functionality (make a manual backup of your local database first!). When you’re happy with the changes, push the changes to the integration server.
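
Here’s roughly what that cycle looks like with Drush, assuming a feature named mysite_config (the name is made up):

# On your local environment: pull the new settings into the feature and commit it
drush features-update mysite_config
git add sites/all/modules/features/mysite_config
git commit -m "Capture new view and variable settings in mysite_config"

# On the integration server: pull the code, then revert so the site uses the exported settings
git pull
drush features-revert mysite_config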

Using Features with your local development environment should minimize the number of changes you need to make directly on the server.

Documenting specific versions or module sources: You can use Drush Make to document the specific versions or sources you use for your Drupal modules.
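
A bare-bones make file might look like the sketch below (the versions are placeholders, not recommendations):

cat > project.make <<'EOF'
core = 6.x
api = 2
projects[drupal][version] = "6.22"
projects[views][version] = "2.12"
projects[cck][version] = "2.9"
EOF
drush make project.make ./build    # rebuild the platform from the documented versions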

Testing: In development, there are few things as frustrating as finding you’ve broken something that was working before. Save yourself lots of time and hassle by investing in automated tests. You can use Simpletest to test Drupal sites, and you can also use external testing tools such as Selenium. Tests can help you quickly find and compare working and non-working versions of your code so that you can figure out what went wrong.
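
For example, Simpletest comes with a command-line runner (scripts/run-tests.sh in Drupal 7 core, or bundled with the Simpletest module on Drupal 6) that you can call from a cron job or continuous integration server. The URL and test group below are hypothetical:

php scripts/run-tests.sh --url http://project.local "My Module"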

What are your practices and tips?

2011-06-09 Thu 12:25

Thinking about our development practices

| development, geek, kaizen

We’re gearing up for another Drupal project. This one is going to be interesting in terms of workflow. I’m working with the clients, an IBM information architect, a design firm, another IBM developer, and a development firm. Fortunately, the project manager (Lisa Imbleau) has plenty of experience coordinating these inter-company projects.

I feel a little nervous about the project because there are a lot of things to be clarified and there’s a bit of time pressure. I’m sure that once we get into the swing of things, though, it’ll be wonderful.

I’m used to working with other developers within IBM, and I’m glad I picked up a lot of good practices from the people I’ve had the pleasure to work with over the years. I’m looking forward to learning even more from the people I get to work with this time around.

In particular, I’m looking forward to:

  • learning from how Lisa manages the project, clarifies requirements, and coordinates with other companies
  • learning from the other developers about what works and doesn’t work for them
  • planning more iteratively and getting more testing cycles in
  • implementing continuous integration testing using Hudson and Simpletest
  • getting even deeper in Drupal: Views, Notifications, maybe Organic Groups
  • using a git-integrated issue tracker such as Redmine
  • … while knowing when to just use pre-built modules, of course

It’s also a good opportunity to figure out which of our practices are new to others, and to write about those practices and improve them further. Some things that have turned up as different:

  • We organize our Drupal modules into subdirectories of sites/all/modules/: features, custom, contrib, and patched.
  • I use Simpletest a lot, and would love to help other people with it or some other automated testing tool.

Much learning ahead!

VMWare, Samba, Eclipse, and XDebug: Mixing a virtual Linux environment with a Microsoft Windows development environment

Posted: - Modified: | development, drupal, geek

I’m starting the second phase of a Drupal development project, which means I get to write about all sorts of geeky things again. Hooray! So I’m investing some time into improving my environment set-up, and taking notes along the way.

This time, I’m going to try developing code in Eclipse instead of Emacs, although I’ll dip into Emacs occasionally if I need to do anything involving keyboard macros or custom automation. Setting up a good Eclipse environment will help me use XDebug for line-by-line debugging. var_dump can only take me so far, and I still haven’t figured out how to properly use XDebug under Emacs. Configuring Eclipse will also help me help my coworkers, who tend to not be big Emacs fans. (Sigh.)

So here’s my current setup:

  • A Linux server environment in VMWare, so that I can use all the Unix tools I like and so that I don’t have to fuss about with a WAMP stack
  • Samba for sharing the source code between the Linux VM image and my Microsoft Windows laptop
  • XDebug for debugging
  • Eclipse and PDT for development

I like this because it allows me to edit files in Microsoft Windows or in Linux, and I can use step-by-step debugging instead of relying on var_dump.

Setting up Samba

Samba allows you to share folders on the network. Edit your smb.conf (mine’s in /etc/samba/) and uncomment/edit the following lines:

security = user

[homes]
   comment = Home Directories
   browseable = no
   read only = no
   valid users = %S

You may also need to use smbpasswd to set the user’s password.
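
Something like this sets up the account and picks up the configuration change (the username is a placeholder, and the init script may be called smbd or samba depending on your distribution):

sudo smbpasswd -a myuser                   # set a Samba password for the share
sudo /etc/init.d/smbd restart              # or: sudo /etc/init.d/samba restart
smbclient //localhost/myuser -U myuser     # quick sanity check before mapping the drive in Windows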

Xdebug

Install php5-xdebug or whatever the Xdebug package is for PHP on your system. Edit xdebug.ini (mine’s in /etc/php5/conf.d) and add the following lines to the end:

[Xdebug]
xdebug.remote_enable=on
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
xdebug.remote_autostart=1
xdebug.remote_connect_back=1

Warning: this allows debugging access from any computer that connects to it. Use this only on your development image. If you want to limit debugging access to a specific computer, remove the line that refers to remote_connect_back and replace it with this:

xdebug.remote_host=YOUR.IP.ADDRESS.HERE
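
After saving the file, restart the web server so PHP picks up the new settings, and check that the extension is loaded (the paths assume a Debian/Ubuntu-style layout):

sudo /etc/init.d/apache2 restart
php -m | grep -i xdebug    # should list the xdebug module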

Eclipse and PDT

I downloaded the all-in-one PHP Development Toolkit (PDT) from http://www.eclipse.org/pdt/, unpacked it, and imported my project. After struggling with JavaScript and HTML validation, I ended up disabling most of those warnings. Then I set up a debug configuration that used Xdebug and the server in the VM image, and voila! Line-by-line debugging with the ability to inspect variables. Hooray!

2011-05-31 Tue 17:37

Rails: Preserving test data

Posted: - Modified: | development, geek, rails

I’m using Cucumber for testing my Rails project. The standard practice for automated testing in Rails is to make each test case completely self-contained and wipe out the test data after running the test. The test system accomplishes this by wrapping the operations in a transaction and rolling that transaction back at the end of the test. This is great, except when you’re developing code and you want to poke around the test environment to see what’s going on outside the handful of error messages you might get from a failed test.

I set up my test environment so that data stays in place after a test is run, and I modified my tests to clean up the data they need cleared. This is what I set in my features/support/env.rb:

Cucumber::Rails::World.use_transactional_fixtures = false

I also removed database_cleaner.

You can set this behaviour on a case-by-case basis with the tag @no-txn.

Running the tests individually with bundle exec cucumber ... now works. I still have to figure out why the database gets dropped when I do rake cucumber, though…
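
With transactions off, it’s easy to poke at whatever a scenario left behind (the feature file name is made up):

bundle exec cucumber features/sign_up.feature
bundle exec rails console test    # inspect the leftover records in the test environment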

2011-04-24 Sun 16:21

Rails: Paperclip needs attributes defined by attr_accessible, not just attr_accessor

Posted: - Modified: | development, geek, rails

I wanted to add uploaded files to the survey response model defined by the Surveyor gem. I’d gotten most of the changes right, and the filenames were showing up in the model, but Paperclip wasn’t saving the files to the filesystem. As it turns out, Paperclip requires that your attributes (ex: :file_value for my file column) be tagged with attr_accessible, not just attr_accessor.

Once you define one attr_accessible item, you need to define all the ones you need, or mass-assigning attributes with update_attributes will fail. This meant adding a whole bunch of attributes to my attr_accessible list, too.

If you’re using accepts_nested_attributes_for, you will need to use attr_accessible there, too.

Sharing the note here just in case anyone else runs into it. Props to Tam on StackOverflow for the tip!

2011-04-01 Fri 12:41

Setting up Ruby on Rails on a Redhat Enterprise Linux Rackspace Cloud Server

| development, geek, rails, ruby, work

1. Compile Ruby from source.

First, install all the libraries you’ll need to compile Ruby.

yum install gcc zlib zlib-devel libxml2-devel openssl openssl-devel

My particular application has problems with Ruby 1.9.2, so I compiled Ruby 1.8.7 instead. This can be downloaded from ftp://ftp.ruby-lang.org/pub/ruby/1.8/ruby-1.8.7-p174.tar.gz

Unpack the source code for Ruby. Configure and install it with:

./configure
make
make install

Add /usr/local/bin to the beginning of your PATH.
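
One way to do that, assuming a bash login shell:

echo 'export PATH=/usr/local/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile
which ruby    # should now point at /usr/local/bin/ruby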

2. Install RubyGems.

Download the latest RubyGems package and unpack it. I got mine from http://production.cf.rubygems.org/rubygems/rubygems-1.7.1.tgz. Change to the directory and run:

ruby setup.rb

3. Install Rails and rake

gem install rails rake

If all goes well, you should now have Rails and rake.
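
A quick sanity check:

rails -v
rake --version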

Troubleshooting:

builder-2.1.2 has an invalid value for @cert_chain

Downgrade RubyGems to version 1.6.2 with the following command.

gem update --system 1.6.2

(Stack Overflow)

sqlite3-ruby only supports sqlite3 versions 3.6.16+, please upgrade!

Compile sqlite from source:

wget http://www.sqlite.org/sqlite-amalgamation-3.7.0.1.tar.gz
tar zxvf sqlite-amalgamation-3.7.0.1.tar.gz
cd sqlite-amalgamation-3.7.0.1
./configure
make
make install
gem install sqlite3

LoadError: no such file to load -- openssl

  1. Install openssl and openssl-devel.
    yum install openssl openssl-devel
    
  2. Go to your Ruby source directory and run the following commands:
    cd ext/openssl
    ruby extconf.rb
    make
    make install
    

LoadError: no such file to load -- readline

yum install readline-devel

Change to your Ruby source directory and run the following:

cd ext/readline
ruby extconf.rb
make
make install

(Code snippets)

You can’t access port 80 from another computer.

Port 80 (the web server port) is blocked by default on Redhat Enterprise Linux 5.5. Edit /etc/sysconfig/iptables to allow it, adding a line like:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

Make sure you put it above the REJECT all line.

Load your changes with

/etc/init.d/iptables restart

(Cyberciti)

2011-04-04 Mon 11:06