james=# ALTER USER servernotes PASSWORD 'servernotes';
ALTER ROLE
james=# alter role servernotes login;
ALTER ROLE
james=# REVOKE CONNECT ON DATABASE servernotes FROM PUBLIC;
REVOKE
james=# grant connect on database servernotes to servernotes;
Factory Girl is powerful as heck, but some of the errors for has_many associations surface only as an exception from FactoryGirl’s SyntaxRunner. A few google searches turn up a bunch of different causes, but here’s one that I hadn’t seen answered anywhere else. This could be caused by ruby 2.1, or by using the FactoryGirl 3.0.0 beta; I’m not too sure.
I have an association:
to_do_item belongs_to to_do_list
to_do_list has_many to_do_items
I wanted a to_do_list factory that contained several to_do_items. You set this up with the after(:create) callback. For more information, see this excellent blog post from thoughtbot. My factory read like this:
FactoryGirl.define do
  factory :to_do_list do
  end

  factory :three_item_list, parent: :to_do_list do
    after(:create) do |list|
      to_do_items << FactoryGirl.create(:to_do_item, {content: "item one", to_do_list: list, created_at: (DateTime.now - 1.hour)})
      to_do_items << FactoryGirl.create(:to_do_item, {content: "item two", to_do_list: list, created_at: (DateTime.now - 1.day)})
      to_do_items << FactoryGirl.create(:to_do_item, {content: "item three", to_do_list: list, created_at: (DateTime.now)})
    end
  end
end
The factory itself passes a .valid? call, but trying to use it threw the SyntaxRunner exception. After reading and re-reading the FactoryGirl readme a bunch, on a whim I tried out:
FactoryGirl.define do
  factory :to_do_list do
  end

  factory :three_item_list, parent: :to_do_list do
    after(:create) do |list|
      list.to_do_items << FactoryGirl.create(:to_do_item, {content: "item one", to_do_list: list, created_at: (DateTime.now - 1.hour)})
      list.to_do_items << FactoryGirl.create(:to_do_item, {content: "item two", to_do_list: list, created_at: (DateTime.now - 1.day)})
      list.to_do_items << FactoryGirl.create(:to_do_item, {content: "item three", to_do_list: list, created_at: (DateTime.now)})
    end
  end
end
and my specs stopped throwing a SyntaxRunner exception and started failing as I’d expect them to. So there you go, it might be necessary in a has_many association to explicitly name the receiving object.
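The same failure mode can be reproduced in plain ruby. Here’s a minimal sketch (the class names are hypothetical stand-ins, not FactoryGirl internals): when a callback block is run with instance_exec, self inside the block is the runner, so a bare method call never reaches your model.

```ruby
# Stand-in for FactoryGirl's SyntaxRunner: it evaluates callback
# blocks against itself, passing your record in as a block argument.
class Runner
  def run(record, &block)
    instance_exec(record, &block) # self inside the block is this Runner
  end
end

class List
  def to_do_items
    @to_do_items ||= []
  end
end

list = List.new
Runner.new.run(list) do |l|
  l.to_do_items << "item one" # explicit receiver: reaches the List
  # to_do_items << "item two" # bare call: NameError, Runner has no #to_do_items
end

puts list.to_do_items.inspect # => ["item one"]
```

Uncomment the bare call and you get a NameError from the runner object, which is the same shape of failure the factory produced.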
I’ve normally been a mysql user, and know just enough admin stuff with mysql to get rails started up. Today, I taught myself just enough postgres admin stuff to get up and running with postgres. Using homebrew complicates the process slightly.
Here’s what you expect from reading stackoverflow:
brew install postgresql
psql -U postgres
This won’t work. Neither will any solution like:
sudo su - postgres
psql
The reason is that a lot of people answering on stack overflow are running linux. Most linux package managers create a dedicated postgres system user as part of installing postgres, which is why becoming that user works there.
The documentation for postgres tells you that you will need to become the operating system user under which PostgreSQL was installed (usually postgres) in order to access the default admin account and create the first user accounts.
Homebrew installs run as a normal user account by default. My default account name on my Mac OS box is james. As such, the default postgres admin login is james:
psql -U james
and that gets me logged in. The prompt shows a # sign when you’re logged in as the superuser role:
james$ psql
psql (9.3.4)
Type "help" for help.
james=#
====
What do you need to do now to get postgres set up for rails? Let’s assume you’re creating a rails app called unimportant. Here’s what you need to do:
rails new unimportant --database=postgresql
cd unimportant
cat config/database.yml
Note that in database.yml, you’re specifically told that the database names you need to set up are:
development:
  <<: *default
  database: unimportant_development

test:
  <<: *default
  database: unimportant_test

production:
  <<: *default
  database: unimportant_production
  username: unimportant
  password:
So you need to make 3 new databases, and a user named “unimportant” who can access the production database. Assuming you’re running everything locally, here’s what you need to know. Postgres comes with some command line tools that make creating databases a little easier; each one is equivalent to writing the sql yourself:
createdb unimportant_development
createdb unimportant_test
createdb unimportant_production
Postgres also comes with a tool for making new users. We need an unimportant user:
createuser unimportant
You’re not done yet, though, because the unimportant user doesn’t have permission to write to the unimportant production database. First, let’s look at what postgres sees as permissions for our users. From the postgres pdf, chapter 20:
The concept of roles subsumes the concepts of “users” and “groups”. In PostgreSQL versions before 8.1, users and groups were distinct kinds of entities, but now there are only roles. Any role can act as a user, a group, or both.
So we’re actually looking for how to view roles. From the same chapter: To determine the set of existing roles, examine the pg_roles system catalog, for example SELECT rolname FROM pg_roles;
The psql program’s \du meta-command is also useful for listing the existing roles.
Let’s try it out:
james$ psql
psql (9.3.4)
Type "help" for help.
james=# \q
greentreeredsky:unimportant james$ psql -U james
psql (9.3.4)
Type "help" for help.
james=# \du
List of roles
Role name | Attributes | Member of
-------------+------------------------------------------------+-----------
james | Superuser, Create role, Create DB, Replication | {}
unimportant | Create DB | {}
james=# SELECT rolname FROM pg_roles;
rolname
-------------
james
unimportant
(2 rows)
james=# SELECT * FROM pg_roles;
rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcatupdate | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil | rolconfig | oid
-------------+----------+------------+---------------+-------------+--------------+-------------+----------------+--------------+-------------+---------------+-----------+-------
james | t | t | t | t | t | t | t | -1 | ******** | | | 10
unimportant | f | t | f | f | f | f | f | -1 | ******** | | | 16389
(2 rows)
This is a bit difficult to read, sorry, but the last query is telling us all the permissions that the admin user and the unimportant user have. In order for the unimportant user to actually be capable of managing the rails production db, it needs the rolcreatedb permission. To be able to log in to the psql command console it needs the rolcanlogin permission too. The postgresql documentation tells us to use the ALTER ROLE sql command to grant these, rather than the mysql-style GRANT command. The docs for ALTER ROLE tell us which attributes we can set.
james=# alter role unimportant login;
james=# alter role unimportant createdb;
All done!
Let’s say you want to use getopts to parse ARGV in bash. Let’s say you also want a non-getopts argument. For example:
./foo.sh argument
./foo.sh -x argument
./foo.sh -abx argument
The non-optional argument is here always given as the last argument.
So, how do you retrieve that last argument after getopts is finished doing its work? Check it out, yo:
while getopts "ghr" opt; do
  case $opt in
    h)
      help_message
      ;;
    g)
      GIT_DEPLOY=true
      ;;
    r)
      RSYNC_DEPLOY=true
      ;;
    \?)
      echo "Invalid option handed in. Usage:"
      help_message
      ;;
  esac
done
OPTIONS=${@:$OPTIND}
That last line is the magic. getopts sets $OPTIND to the index of the first argument it didn’t parse, i.e. one past the final option. Slice bash’s ARGV, fondly known as $@, starting at $OPTIND and you get everything left over; with a single trailing argument, that’s your last item.
I just spent a wonderful day tackling a single problem. I wanted the rbenv install plugin to update itself. I decided that was a single instance of a broader problem, having rbenv plugins update automagically.
If I wanted a rbenv plugin to update automagically, though, surely I also wanted rbenv to update itself? Surely! That makes sense, after all!
The end result is three github repositories. First, a tools repository that contains a couple of scripts I wrote to help me write rbenv plugins. It is here https://github.com/jamandbees/rbenvdevtools.
Second, a repository with a rbenv plugin that does a self update. https://github.com/jamandbees/rbenv-selfupdate
Third, a repository with a rbenv plugin that updates rbenv plugins. https://github.com/jamandbees/rbenv-plugins
Okay, so it turned out that reading the rake documentation was helpful as heckfire. Here’s the line that kicked off my curiosity, from a rails app:
require File.expand_path('../config/application', __FILE__)
Rake is apparently simple, used all over the place, and a bit fiddly. Whenever you run a rake task, the entire rails application is initialised in the background. But how? HOW?!
Well, there’s no specific mechanism in rake that I’m aware of that arbitrary tasks inherit from. So it’s either the case that there’s a mechanism I don’t know about, or it’s the case that the rake tool has been monkeypatched somehow.
The case for rake being monkeypatched seems promising. How does rake -vT know to look in lib/tasks for extra tasks? Rails configuration, that’s how. So, how do you configure rake this way? I want to start by working out where a standard rake task is defined, so I try:
rake --where db:migrate
From here, I can see that railties is responsible for the db tasks. I also try:
rake --where log:clear
and it’s clear again that railties is coming into play. I’ve got a vague idea from the name, and my own reading, that railties is some kind of glue for rails. So, let’s dig a little deeper. I’ve opened my text editor to the location of the railties gem (it’s in the path given out by rake --where log:clear), and there’s a readme. It confirms that railties is a glue gem.
I search for all files beginning with Rake, just because I’ve got to start somewhere, and find a rakefile in lib/rails/generators/rails/app/templates/ which has the line <%= app_const %>.load_tasks. This reads like an erb template, and I’m not specifically interested save that it gives me a clue: presumably, tasks are discovered by rails using a load_tasks method. Searching for “def load_tasks” in railties reveals:
# Load the application and its railties tasks and invoke the registered hooks.
# Check Rails::Railtie.rake_tasks for more info.
def load_tasks(app=self)
  initialize_tasks
  super
  self
end
I can buy that initialize_tasks is going to be where I need to start. I search for def initialize_tasks and find:
def initialize_tasks #:nodoc:
  self.class.rake_tasks do
    require "rails/tasks"
    task :environment do
      $rails_rake_task = true
      require_environment!
    end
  end
end
require_environment!? What does that sound like it does?
def require_environment! #:nodoc:
  environment = paths["config/environment"].existent.first
  require environment if environment
end
The namespace we’re in right now is “class Application < Engine”.
—
I can buy that this is where the environment is getting pulled in for _something_. I don’t know whether it’s the current app I’m working on, some rails-y magic or whatever. However, this doesn’t specifically help me out right now, because I’m trying to prevent rails loading for a rake task. rake task:name will still need to know how to load the rakefile in order to make any progress, which requires that the rails app has loaded. I’ve got a little further on the path to accomplishing my goal, but I’m not quite there.
—
Of course, I could just be overcomplicating things and there’s simply a Rakefile in rails root. And this slurps in the application config. Curses!
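You can see the mechanics with plain rake, no rails required. A minimal sketch (the task names and BOOT_LOG constant are hypothetical): everything at the top level of a Rakefile runs at load time, before rake even looks at which task you asked for, which is exactly how that config/application require boots the whole app.

```ruby
require 'rake'
extend Rake::DSL # make task/namespace available in a plain script

# In a real rails Rakefile, this top-level line boots the entire app:
#   require File.expand_path('../config/application', __FILE__)
# We fake it here with a log entry to show the ordering.
BOOT_LOG = []
BOOT_LOG << "app booted" # stands in for the config/application require

task :environment do
  BOOT_LOG << "environment loaded" # where require_environment! would run
end

# Most rails tasks depend on :environment, the same way this one does.
task stats: :environment do
  BOOT_LOG << "stats ran"
end

Rake::Task[:stats].invoke
puts BOOT_LOG.inspect # => ["app booted", "environment loaded", "stats ran"]
```

The ordering in the log is the whole story: load the Rakefile (and with it, the app), then prerequisites, then the task you named.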
There seems to be a lot of confusion about this. The tl;dr summary is that this rails commit claims that the reason this is hard “stems from the fact that subdomain is defined in ActionDispatch::Request and the test session uses Rack::Request”. The solution was to “Extend assert_recognizes and assert_generates to support passing full urls as the path argument. This allows testing of routing constraints such as subdomain and host within functional tests.”
So your code should read as:
describe "GET /index_exists" do
  it "works! (now write some real specs)" do
    get "http://subdomain.domain.com"
    response.status.should be(200)
  end
end
—
So that’s a solution. Let’s talk about the problems. If you search for rails integration test subdomains, the first link is for a stackoverflow question and answer that suggests the following:
def setup
  host! "my.host"
end
When I try this, I get:
No route matches [GET] "<no hostname>/path"
—
I encountered something in rails today that I haven’t seen before. It was clearly a ruby feature, so I wrote up an example in ruby. It’s an attempt to access a private instance method from both a public class method and a public instance method:
You have a class:
class SimpleClass
  def self.stuff
    "Private method returns #{private_stuff}"
  end

  def things
    "Private method returns #{private_stuff}"
  end

  private

  def private_stuff
    "content"
  end
end
puts SimpleClass.new.things
puts SimpleClass.stuff
I expected that in both cases, I’d get the string out:
Private method returns content
However, it turns out these are different. The instance method version returns the output as expected. The class method?
undefined local variable or method `private_stuff' for SimpleClass:Class (NameError)
So, class methods cannot access private instance methods.
The rule in general is that a private method cannot have an explicit receiver. I am left wondering if a class method has an explicit receiver, and I’m just not seeing it.
Edit: hah! I was wrong in my supposition about an explicit receiver. The issue is that class methods are defined in the eigenclass, which is a step in the inheritance hierarchy above the current class and as such cannot access private methods in the current class.
This brings up the richer point: can a public instance method be accessed from within a class method? I’m betting on no.
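A quick sketch to test that bet (class name hypothetical): a class method has no instance in scope, so a bare call to a public instance method fails the same way; it only works through an explicitly constructed object.

```ruby
class Demo
  def self.bare_call
    greeting # implicit self here is the class Demo, which has no #greeting
  rescue NameError
    "no instance in scope"
  end

  def self.via_instance
    Demo.new.greeting # works: we build an instance and use an explicit receiver
  end

  def greeting # a public instance method
    "hello"
  end
end

puts Demo.bare_call    # => "no instance in scope"
puts Demo.via_instance # => "hello"
```

So the bet is right for bare calls, public or private: the class simply has no instance to dispatch to. The difference is that a public instance method can still be reached by building an instance and naming it as the receiver, which the private method forbids.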
I spent a good portion of today looking at integration tests.
In the normal course of things, a website undergoes integration tests from an external source; in my career it’s something like selenium for external testing of a website, and faraday for testing a RESTful API.
I’ve been thinking a lot, though, about where selenium fits into the test cycle. There’s a kind of test that I’ve seen selenium used for a lot, but is not its strong point, and that’s testing simple workflows: does /blogpost/new contain a title string, or does /index contain a sign_in link? Selenium _can_ do these things, but I think its real strong suit is complex scenarios: can I log in, then create a blog post, then assign the post to a calendar date, then make sure that the calendar picker doesn’t let me assign two posts to a date, and so on. Lots of steps, each of which takes you further into the application.
Simple stuff like checking if the front page is actually present can absolutely be covered by selenium, but it seems like I see two problems:
I wanted to solve both of these, and in rails it looks like the built in integration test stuff really solves both in a neat way. It sits there, between the unit test stuff and the full blown complex testing scenarios, providing a simple to maintain, integrated set of tests that nonetheless can be very useful.
Here’s a few downright useful tests I put together with a hand from a couple of guys with more UI experience than I have. Start by generating the integration test:
rails generate integration_test blogs
Now, here’s a neat set of simple tests:
class BlogTest < ActionDispatch::IntegrationTest
  test "browse index" do
    get "/"
    assert_response :success
    assert_select "h1"
  end

  test "browse new page" do
    get "/blogs/new"
    assert_response :success
    assert_select "input"
  end

  test "Find some specific field" do
    get "/blogs/new"
    assert_response :success
    assert_select "div.field"
  end

  test "Find some specific text area" do
    get "/blogs/new"
    assert_response :success
    assert_select "textarea[name='blog[body]']"
  end
end
If you’ve already installed capybara, the syntax is a little different but not bad:
describe "GET /blogs" do
  it "works! (now write some real specs)" do
    # Run the generator again with the --webrat flag if you want to use webrat methods/matchers
    visit new_blog_path
    fill_in "Title", with: "Jamandbees' awesome blogness"
    fill_in "Body", with: "Jamandbees writes about sadness"
    click_button "Create Blog"
    page.should have_content "Blog was successfully created."
    page.should have_content "blogness"
  end
end
That’s literally all there is to some very basic integration tests in rails.
I’m a professional QA resource; my idea about QA is that a QA person should know and understand the stack they’re working with as well as the developers understand it, be able to comment on it effectively and, yes, write code in the same language the developers are working in. As a QA person working in rails, I know the full stack back to front in a basic way, sufficient to sit down with developers, read code and comment upon it with them. If you can’t code review the codebase, there are entire stacks and oodles of bugs you cannot find.
I gave a presentation today about integration testing in rails and got clearance to start writing some of the basic integration tests that will improve our codebase. I can write these, have them integrated in the build and be confident that my (frankly, excellent) team of colleagues in development will be able to maintain them. I think that when an application is still young and in flux, having the QA person write tests and developers easily maintain them is a good balance for the team.