image builder - css cleanup, collapsible tree
by Jan Provazník
Resending this patchset (the older version no longer applies because of
some fixes in the patch "image builder - package selection and javascript").
[PATCH aggregator 1/2]: I tried to clean up the Image Builder CSS
[PATCH aggregator 2/2]: The list of selected packages on the "Software" page is now collapsible in the JS version (the non-JS version still displays the expanded list)
(this patchset depends on previous Image Builder patches)
[PATCH] First cut at Condor to implement jobs
by Ian Main
This patch is a first try at using condor as a job management system.
It removes the use of the 'taskomatic' utilities and replaces them
with 'condormatic' calls that drive condor through its command-line
interfaces (no QMF or gSOAP, etc.).
On server startup (and on any changes afterwards), a set of
'classads' is created which defines each possible startup location for
a given set of image/hardware profiles that exist and are usable, as
well as the backend info condor needs to start an instance on the given
provider.
For each instance that you start, a job is created in condor.
Condor then matches the hardware profile and image to a provider and
starts an instance on that provider. When you stop or destroy
that instance, the job is removed (which isn't really how we want
it to work, but more on that below).
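To make the matching concrete, here is a toy Ruby sketch of the idea. This is illustrative only: the attribute names echo the ones the patch advertises, but the hash-based matcher is a stand-in for condor's real ClassAd requirements language.

```ruby
# Each provider/account/image/hardware-profile combination becomes a
# "machine" classad; a job's requirements pick the ad it can run on.
classads = [
  { :name => 'provider_combination_0', :hardwareprofile => '1', :image => '2' },
  { :name => 'provider_combination_1', :hardwareprofile => '3', :image => '2' }
]

# Stand-in for condor's matchmaker: find the first ad that satisfies
# every attribute the job requires.
def match_job(classads, requirements)
  classads.find do |ad|
    requirements.all? { |attr, value| ad[attr] == value }
  end
end

ad = match_job(classads, :hardwareprofile => '3', :image => '2')
ad[:name]  # => "provider_combination_1"
```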
This patch requires that you have our custom hacked up condor installed.
You can get this at:
http://people.redhat.com/clalance/condor-dcloud
Be sure to read the README. Chris has written up very good instructions
on how to set up condor.
In general, everything here basically works. There are, however, several
known bugs and deficiencies:
- To 'stop' a job in condor we should be using 'hold' instead of
removing the job; removing it causes a few different problems.
- After stopping an instance the condor job is removed but the instance
continues to exist in deltacloud. On a subsequent 'start' the start
fails.
- I'm only matching on image and hardware profiles, not realms, and
I'm ignoring quotas too.
- We still reach directly into the Deltacloud API to get the list of
available actions for each instance. Maybe this is fine; I'm not
sure.
- Classads are sync'd to condor on startup and on any changes to the
hardware profile and image records. However, if you restart condor
you won't have any classads in it to match against and your jobs will
fail.
- We're still using 'on-demand' syncing of states from condor to the
aggregator, e.g. when you list the instances, the state of each
instance is updated from condor at that time. There is no event logging.
- There's no 'reboot' as yet in condor. Not sure how we'll deal with
that just yet.
- We've kept the tasks model and usage, but they are quasi-meaningless.
The task table needs to turn into an event/audit log table.
Many of these problems have fixes in-progress or will be addressed in
future patches.
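The patch drives every condor CLI call through the same small pattern, shown here on a harmless command: merge stderr into stdout with 2>&1 (popen3 in ruby 1.8 could not report the exit status), read all output, then check $?.

```ruby
# Run a command, capturing stdout and stderr together, and check the
# exit status afterwards via $? -- the same idiom condormatic.rb uses
# around condor_submit and condor_q.
pipe = IO.popen("echo hello 2>&1")
out = pipe.read
pipe.close

raise "command failed: #{out}" if $? != 0
out.strip  # => "hello"
```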
Signed-off-by: Ian Main <imain(a)redhat.com>
---
src/app/controllers/instance_controller.rb | 17 +-
src/app/controllers/pool_controller.rb | 5 +-
src/app/models/hardware_profile_observer.rb | 9 +
src/app/models/image_observer.rb | 9 +
src/app/util/condormatic.rb | 232 +++++++++++++++++++++
src/config/environment.rb | 2 +-
src/config/initializers/condor_classads_sync.rb | 8 +
src/db/migrate/20090804142049_create_instances.rb | 1 +
8 files changed, 275 insertions(+), 8 deletions(-)
create mode 100644 src/app/models/hardware_profile_observer.rb
create mode 100644 src/app/models/image_observer.rb
create mode 100644 src/app/util/condormatic.rb
create mode 100644 src/config/initializers/condor_classads_sync.rb
diff --git a/src/app/controllers/instance_controller.rb b/src/app/controllers/instance_controller.rb
index 039ed3a..5664ec5 100644
--- a/src/app/controllers/instance_controller.rb
+++ b/src/app/controllers/instance_controller.rb
@@ -19,7 +19,7 @@
# Filters added to this controller apply to all controllers in the application.
# Likewise, all the methods added will be available for all controllers.
-require 'util/taskomatic'
+require 'util/condormatic'
class InstanceController < ApplicationController
before_filter :require_user
@@ -96,8 +96,7 @@ class InstanceController < ApplicationController
:task_target => @instance,
:action => InstanceTask::ACTION_CREATE})
if @task.save
- task_impl = Taskomatic.new(@task,logger)
- task_impl.instance_create
+ condormatic_instance_create(@task)
flash[:notice] = "Instance added."
redirect_to :controller => "pool", :action => 'show', :id => @instance.pool_id
else
@@ -124,8 +123,16 @@ class InstanceController < ApplicationController
raise ActionError.new("#{action} cannot be performed on this instance.")
end
- task_impl = Taskomatic.new(@task,logger)
- task_impl.send "instance_#{action}"
+ case action
+ when 'stop'
+ condormatic_instance_stop(@task)
+ when 'destroy'
+ condormatic_instance_destroy(@task)
+ when 'start'
+ condormatic_instance_create(@task)
+ else
+ raise ActionError.new("Sorry, action '#{action}' is currently not supported by condor backend.")
+ end
alert = "#{@instance.name}: #{action} was successfully queued."
flash[:notice] = alert
diff --git a/src/app/controllers/pool_controller.rb b/src/app/controllers/pool_controller.rb
index e687c0b..9d53862 100644
--- a/src/app/controllers/pool_controller.rb
+++ b/src/app/controllers/pool_controller.rb
@@ -20,6 +20,7 @@
# Likewise, all the methods added will be available for all controllers.
require 'util/taskomatic'
+require 'util/condormatic'
class PoolController < ApplicationController
before_filter :require_user
@@ -36,8 +37,8 @@ class PoolController < ApplicationController
#FIXME: clean this up, many error cases here
@pool = Pool.find(params[:id])
require_privilege(Privilege::INSTANCE_VIEW,@pool)
- # pass nil into Taskomatic as we're not working off a task here
- Taskomatic.new(nil,logger).pool_refresh(@pool)
+ # Go to condor and sync the database to the real instance states
+ condormatic_instances_sync_states
@pool.reload
@instances = @pool.instances
end
diff --git a/src/app/models/hardware_profile_observer.rb b/src/app/models/hardware_profile_observer.rb
new file mode 100644
index 0000000..c924bdb
--- /dev/null
+++ b/src/app/models/hardware_profile_observer.rb
@@ -0,0 +1,9 @@
+class HardwareProfileObserver < ActiveRecord::Observer
+
+ def after_save(hwp)
+ condormatic_classads_sync
+ end
+end
+
+HardwareProfileObserver.instance
+
diff --git a/src/app/models/image_observer.rb b/src/app/models/image_observer.rb
new file mode 100644
index 0000000..68a5b85
--- /dev/null
+++ b/src/app/models/image_observer.rb
@@ -0,0 +1,9 @@
+class ImageObserver < ActiveRecord::Observer
+
+ def after_save(image)
+ condormatic_classads_sync
+ end
+end
+
+ImageObserver.instance
+
diff --git a/src/app/util/condormatic.rb b/src/app/util/condormatic.rb
new file mode 100644
index 0000000..7ec6e01
--- /dev/null
+++ b/src/app/util/condormatic.rb
@@ -0,0 +1,232 @@
+#
+# Copyright (C) 2010 Red Hat, Inc.
+# Written by Ian Main <imain(a)redhat.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; version 2 of the License.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
+# MA 02110-1301, USA. A copy of the GNU General Public License is
+# also available at http://www.gnu.org/copyleft/gpl.html.
+
+def condormatic_instance_create(task)
+
+ begin
+ instance = task.instance
+ # FIXME: We should be using the realm name and matching it in condor.
+ realm = instance.realm.external_key rescue nil
+
+ job_name = "job_#{instance.name}_#{instance.id}"
+
+
+ # I use 2>&1 to get stderr and stdout together because popen3 does not
+ # give access to the command's exit value in ruby 1.8.
+ pipe = IO.popen("condor_submit 2>&1", "w+")
+ pipe.puts "universe = grid\n"
+ Rails.logger.info "universe = grid\n"
+ pipe.puts "executable = #{job_name}\n"
+ Rails.logger.info "executable = #{job_name}\n"
+ pipe.puts "grid_resource = dcloud $$(provider_url) $$(username) $$(password) $$(image_key) #{instance.name} NULL $$(hardwareprofile_key)\n"
+ Rails.logger.info "grid_resource = dcloud $$(provider_url) $$(username) $$(password) $$(image_key) #{instance.name} NULL $$(hardwareprofile_key)\n"
+ pipe.puts "log = #{job_name}.log\n"
+ Rails.logger.info "log = #{job_name}.log\n"
+ pipe.puts "requirements = hardwareprofile == \"#{instance.hardware_profile.id}\" && image == \"#{instance.image.id}\"\n"
+ Rails.logger.info "requirements = hardwareprofile == \"#{instance.hardware_profile.id}\" && image == \"#{instance.image.id}\"\n"
+ pipe.puts "notification = never\n"
+ Rails.logger.info "notification = never\n"
+ pipe.puts "queue\n"
+ Rails.logger.info "queue\n"
+ pipe.close_write
+ out = pipe.read
+ pipe.close
+
+ Rails.logger.info "$? (return value?) is #{$?}"
+ raise ("Error calling condor_submit: #{out}") if $? != 0
+
+ instance.condor_job_id = job_name
+ instance.save!
+
+ rescue Exception => ex
+ task.state = Task::STATE_FAILED
+ Rails.logger.error ex.message
+ Rails.logger.error ex.backtrace
+ else
+ # FIXME: We're kinda lying here.. we don't know the state for the task but I don't think that matters so much
+ # as we are just going to use the 'task' table as a kind of audit log.
+ task.state = Task::STATE_PENDING
+ end
+ task.instance.save!
+end
+
+# JobStatus for condor jobs:
+#
+# 0 Unexpanded U
+# 1 Idle I
+# 2 Running R
+# 3 Removed X
+# 4 Completed C
+# 5 Held H
+# 6 Submission_err E
+#
+
+def condor_to_instance_state(state_val)
+ case state_val
+ when '0'
+ return Instance::STATE_PENDING
+ when '1'
+ return Instance::STATE_PENDING
+ when '2'
+ return Instance::STATE_RUNNING
+ when '3'
+ return Instance::STATE_STOPPED
+ when '4'
+ return Instance::STATE_STOPPED
+ when '5'
+ return Instance::STATE_CREATE_FAILED
+ when '6'
+ return Instance::STATE_CREATE_FAILED
+ else
+ return Instance::STATE_PENDING
+ end
+end
+
+def condormatic_instances_sync_states
+
+ begin
+ # I'm not going to do the 2>&1 trick here since we are parsing the output
+ # and I'm afraid we'll get a warning or something on stderr and it'll mess
+ # up the xml parsing.
+ pipe = IO.popen("condor_q -xml")
+ xml = pipe.read
+ pipe.close
+
+ raise ("Error calling condor_q -xml") if $? != 0
+
+ # Set them all to 'stopped' because if they aren't in the condor
+ # queue as jobs then they are not running, pending or anything else.
+ instances = Instance.find(:all)
+ instances.each do |instance|
+ instance.state = Instance::STATE_STOPPED
+ instance.save!
+ end
+
+ def find_value_int(job_ele, attrib)
+ if job_ele.attributes['n'] == attrib
+ cmd = job_ele.elements.each('i') do |i|
+ return i.text
+ end
+ end
+ return nil
+ end
+
+ def find_value_str(job_ele, attrib)
+ if job_ele.attributes['n'] == attrib
+ cmd = job_ele.elements.each('s') do |s|
+ return s.text
+ end
+ end
+ return nil
+ end
+
+ doc = REXML::Document.new(xml)
+ doc.elements.each('classads/c') do |jobs_ele|
+ job_name = nil
+ job_state = nil
+
+ jobs_ele.elements.each('a') do |job_ele|
+ value = find_value_str(job_ele, 'Cmd')
+ job_name = value if value != nil
+ value = find_value_int(job_ele, 'JobStatus')
+ job_state = value if value != nil
+ end
+
+ Rails.logger.info "job name is #{job_name}"
+ Rails.logger.info "job state is #{job_state}"
+
+ instance = Instance.find(:first, :conditions => {:condor_job_id => job_name})
+
+ if instance
+ instance.state = condor_to_instance_state(job_state)
+ instance.save!
+ Rails.logger.info "Instance state updated to #{condor_to_instance_state(job_state)}"
+ end
+ end
+ rescue Exception => ex
+ Rails.logger.error ex.message
+ Rails.logger.error ex.backtrace
+ end
+end
+
+def condormatic_instance_stop(task)
+ instance = task.instance
+
+ Rails.logger.info("calling condor_rm -constraint 'Cmd == \"#{instance.condor_job_id}\"' 2>&1")
+ pipe = IO.popen("condor_rm -constraint 'Cmd == \"#{instance.condor_job_id}\"' 2>&1")
+ out = pipe.read
+ pipe.close
+
+ Rails.logger.info("condor_rm return status is #{$?}")
+ Rails.logger.error("Error calling condor_rm (exit code #{$?}) on job: #{out}") if $? != 0
+end
+
+def condormatic_instance_destroy(task)
+ instance = task.instance
+
+ Rails.logger.info("calling condor_rm -constraint 'Cmd == \"#{instance.condor_job_id}\"' 2>&1")
+ pipe = IO.popen("condor_rm -constraint 'Cmd == \"#{instance.condor_job_id}\"' 2>&1")
+ out = pipe.read
+ pipe.close
+
+ Rails.logger.info("condor_rm return status is #{$?}")
+ Rails.logger.error("Error calling condor_rm (exit code #{$?}) on job: #{out}") if $? != 0
+end
+
+
+def condormatic_classads_sync
+
+ index = 0
+ providers = Provider.find(:all)
+ Rails.logger.info "Syncing classads.."
+
+ providers.each do |provider|
+ provider.cloud_accounts.each do |account|
+ provider.images.each do |image|
+ provider.hardware_profiles.each do |hwp|
+ pipe = IO.popen("condor_advertise UPDATE_STARTD_AD 2>&1", "w+")
+
+ pipe.puts "Name=\"provider_combination_#{index}\""
+ pipe.puts 'MyType="Machine"'
+ pipe.puts 'Requirements=true'
+ pipe.puts "\n# Stuff needed to match:"
+ pipe.puts "hardwareprofile=\"#{hwp.aggregator_hardware_profiles[0].id}\""
+ pipe.puts "image=\"#{image.aggregator_images[0].id}\""
+ pipe.puts "\n# Backend info to complete this job:"
+ pipe.puts "image_key=\"#{image.external_key}\""
+ pipe.puts "hardwareprofile_key=\"#{hwp.external_key}\""
+ pipe.puts "provider_url=\"#{account.provider.url}\""
+ pipe.puts "username=\"#{account.username}\""
+ pipe.puts "password=\"#{account.password}\""
+ pipe.close_write
+
+ out = pipe.read
+ pipe.close
+
+ Rails.logger.error "Unable to submit condor classad: #{out}" if $? != 0
+
+ index += 1
+ end
+ end
+ end
+
+ Rails.logger.info "done"
+ end
+end
+
diff --git a/src/config/environment.rb b/src/config/environment.rb
index 919a710..eb11f17 100644
--- a/src/config/environment.rb
+++ b/src/config/environment.rb
@@ -50,7 +50,7 @@ Rails::Initializer.run do |config|
config.gem "gnuplot"
config.gem "scruffy"
- config.active_record.observers = :instance_observer, :task_observer
+ config.active_record.observers = :instance_observer, :task_observer, :hardware_profile_observer, :image_observer
# Only load the plugins named here, in the order given. By default, all plugins
# in vendor/plugins are loaded in alphabetical order.
# :all can be used as a placeholder for all plugins not explicitly named
diff --git a/src/config/initializers/condor_classads_sync.rb b/src/config/initializers/condor_classads_sync.rb
new file mode 100644
index 0000000..9165f75
--- /dev/null
+++ b/src/config/initializers/condor_classads_sync.rb
@@ -0,0 +1,8 @@
+require 'util/condormatic'
+
+puts "Syncing condor classads.."
+# This pulls all the possible classad matches from the database and puts
+# them on condor on startup.
+condormatic_classads_sync
+puts "Done."
+
diff --git a/src/db/migrate/20090804142049_create_instances.rb b/src/db/migrate/20090804142049_create_instances.rb
index 335b93f..42706e1 100644
--- a/src/db/migrate/20090804142049_create_instances.rb
+++ b/src/db/migrate/20090804142049_create_instances.rb
@@ -32,6 +32,7 @@ class CreateInstances < ActiveRecord::Migration
t.string :public_address
t.string :private_address
t.string :state
+ t.string :condor_job_id
t.integer :lock_version, :default => 0
t.integer :acc_pending_time, :default => 0
t.integer :acc_running_time, :default => 0
--
1.7.0.1
Paper on FISL approved!
by Eduardo Otubo
Hello all,
I'm not very active on this project, but I've been keeping an eye on it
since the very beginning last year. I've been studying and reading
all the emails and patches here on the list because it is a very
interesting topic for me. Unfortunately I had no time to contribute
actual code. But now I think I have a great way to contribute to
the project: I submitted a paper to FISL and it has been accepted.
For those who do not know, FISL[1] is the Brazilian International Forum
of Free Software (the acronym is in pt_BR). It reaches its 11th edition
this year and is going to happen from the 21st to the 24th of July in Porto
Alegre.
I asked some folks on the IRC channel a couple of months ago if someone
was interested in coming and talking about the project, but no one could
come. So I volunteered to prepare a lecture for this big event
here in Brazil.
The thing is, my lecture was accepted, but I don't know yet whether IBM
will be able to fund my trip. I don't want to wait for the answer before
starting on the slides and the content of the lecture, though; I need to
get it done as soon as possible to make it great for us all.
My idea is to talk about (not on this order):
* How DeltaCloud works / macro view / cases
* Internals
The audience will be very technical, but some business people will be
around. That's why I put that high-level information in the list.
So, I am opening this topic for comments and ideas. Any kind of help
will be very welcome :)
Thanks,
[1] - http://softwarelivre.org/fisl11
P.S.: For those of you on IRC, I am 'otubo' both in #virt @
OFTC and #deltacloud @ Freenode.
--
Eduardo Otubo
Software Engineer
Linux Technology Center
IBM Systems & Technology Group
Mobile: +55 19 8135 0885
eotubo(a)linux.vnet.ibm.com
Trying to get cucumber tests working on fedora 13
by jason.guiditta
[PATCH aggregator] Update cucumber to use 0.8.3
This patch updates some of the cucumber config for 0.8.3.
However, I think the real problem may lie in the rpm for either
Rails 2.3.5 or ActiveSupport, so this can be tested even without
this patch (that worked for me). Yum remove the rails and activesupport
rpms; this will get rid of a bunch of others as well. Then do
'sudo gem install rails -v2.3.5'. You will also need to gem install
will_paginate and gettext-rails, though I think that is it. I am
trying to verify on someone else's system that the rpm version of
these things fails but the gem version works; then we can try to
track down the details enough to file a bug and get the rpm fixed.
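As a side note on the version pinning above, Gem::Requirement is what rubygems itself uses to decide whether an installed version satisfies '-v2.3.5'. A small, hypothetical check (the helper name is ours, not from the patch):

```ruby
require 'rubygems'

# Returns true only for an exact 2.3.5, mirroring 'gem install rails -v2.3.5'.
def gem_rails_ok?(installed_version)
  Gem::Requirement.new('= 2.3.5').satisfied_by?(Gem::Version.new(installed_version))
end

gem_rails_ok?('2.3.5')    # => true
gem_rails_ok?('2.3.5.1')  # => false
```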
-j
Re: [deltacloud-devel] Trying to get cucumber tests working on fedora 13
by Tomas Sedovic
Hey Jason,
Seems like you're dead on.
After I removed the rubygem-rails and rubygem-activerecord packages and installed them as regular gems, the cucumber results were the same as they are on Fedora 12:
..................F-----......................................................
(::) failed steps (::)
No such factory: hardware_profile_auto (ArgumentError)
./features/step_definitions/pool_steps.rb:13:in `__instance_exec0'
./features/step_definitions/pool_steps.rb:12:in `each'
./features/step_definitions/pool_steps.rb:12:in `/^the Pool has the following Hardware Profiles:$/'
features/pool.feature:25:in `And the Pool has the following Hardware Profiles:'
Failing Scenarios:
cucumber features/pool.feature:23 # Scenario: View Pool's Hardware Profiles
9 scenarios (1 failed, 8 passed)
76 steps (1 failed, 5 skipped, 70 passed)
Thanks for looking into this.
Thomas
Deltacloud Client Rework (rev 4)
by Michal Fojtik
Hi,
This patchset delivers fresh new client code. It's a from-scratch
rewrite and adds some nice new features:
- Ruby client documentation generation (http://mifo.sk/dc-doc)
- All RSpec tests now pass
- Implemented a 'plain formatter' for the command-line client
- Autolearning, which adds new elements from the API automatically
- It's a single Ruby file
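A minimal sketch of what the "autolearning" idea could look like: elements in an API response the client has never seen before still become reachable as methods, via method_missing. This is an illustration of the pattern, not the actual client code; the class and element names are made up.

```ruby
require 'rexml/document'

# Hypothetical client object: every child element of the parsed XML
# becomes a reader method, so new API elements work without code changes.
class ApiObject
  def initialize(xml)
    @attrs = {}
    REXML::Document.new(xml).root.elements.each do |el|
      @attrs[el.name.to_sym] = el.text
    end
  end

  def method_missing(name, *args)
    @attrs.key?(name) ? @attrs[name] : super
  end

  def respond_to_missing?(name, include_private = false)
    @attrs.key?(name) || super
  end
end

obj = ApiObject.new('<image><owner_id>fedora</owner_id><arch>x86_64</arch></image>')
obj.owner_id  # => "fedora"
obj.arch      # => "x86_64"
```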
Please let me know if it's working for you. To generate the documentation
locally you can use:
"rake documentation"
(please make sure you have 'yard' installed (gem install yard) and the
deltacloud API running at http://mockuser:mockpassword@localhost:3001/api).
PS:
As part of this rewrite, I broke API backward compatibility.
As we discussed previously, I removed the 'ID' element from all objects and
added it as an attribute.
But you don't need to worry: the new client code uses this new scheme.
Sadly, if you are using the API from Python/Java, you will need to update
your code according to this change.
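The change is easiest to see side by side. A sketch with REXML (the 'instance' element and values here are made-up examples; only the ID-element-to-attribute move comes from the patchset):

```ruby
require 'rexml/document'

old_xml = '<instance><id>inst1</id><name>demo</name></instance>'
new_xml = '<instance id="inst1"><name>demo</name></instance>'

# Old scheme: ID was a child element.
old_id = REXML::Document.new(old_xml).root.elements['id'].text  # "inst1"

# New scheme: ID is an attribute on the object element.
new_id = REXML::Document.new(new_xml).root.attributes['id']     # "inst1"
```

Python/Java consumers would need the equivalent one-line change in their XML handling.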
-- Michal
[PATCH aggregator] Updated Pool Cucumber Feature to support new hwp
by Martyn Taylor
From: martyntaylor <mtaylor(a)redhat.com>
---
src/features/pool.feature | 14 +++++++-------
src/features/step_definitions/pool_steps.rb | 11 ++++++++++-
2 files changed, 17 insertions(+), 8 deletions(-)
diff --git a/src/features/pool.feature b/src/features/pool.feature
index 22a7b33..9e27499 100644
--- a/src/features/pool.feature
+++ b/src/features/pool.feature
@@ -23,18 +23,18 @@ Feature: Manage Pools
Scenario: View Pool's Hardware Profiles
Given I own a pool named "mockpool"
And the Pool has the following Hardware Profiles:
- | name | memory | storage | architecture |
- | m1-small | 1.7 | 160.0 | i386 |
- | m1-large | 7.5 | 850.0 | x86_64 |
- | m1-xlarge | 15.0 | 1690.0 | x86_64 |
+ | name | memory | cpu |storage | architecture |
+ | m1-small | 1740 | 2 | 160.0 | i386 |
+ | m1-large | 4096 | 4 | 850.0 | x86_64 |
+ | m1-xlarge | 8192 | 8 | 1690.0 | x86_64 |
And I am on the homepage
When I follow "mockpool"
Then I should be on the show pool page
When I follow "Hardware Profiles"
Then I should see the following:
- | m1-small | 1.7 | 160.0 | i386 |
- | m1-large | 7.5 | 850.0 | x86_64 |
- | m1-xlarge | 15.0 | 1690.0 | x86_64 |
+ | m1-small | 1740 | 2 | 160.0 | i386 |
+ | m1-large | 4096 | 4 | 850.0 | x86_64 |
+ | m1-xlarge | 8192 | 8 | 1690.0 | x86_64 |
Scenario: View Pool's Realms
Given I own a pool named "mockpool"
diff --git a/src/features/step_definitions/pool_steps.rb b/src/features/step_definitions/pool_steps.rb
index c58fe73..af0cba1 100644
--- a/src/features/step_definitions/pool_steps.rb
+++ b/src/features/step_definitions/pool_steps.rb
@@ -10,7 +10,16 @@ end
Given /^the Pool has the following Hardware Profiles:$/ do |table|
table.hashes.each do |hash|
- @pool.hardware_profiles << Factory(:hardware_profile_auto, hash)
+ memory = Factory(:mock_hwp1_memory, :value => hash[:memory])
+ storage = Factory(:mock_hwp1_storage, :value => hash[:storage])
+ cpu = Factory(:mock_hwp1_cpu, :value => hash[:cpu])
+ arch = Factory(:mock_hwp1_arch, :value => hash[:architecture])
+ @pool.hardware_profiles << Factory(:mock_hwp1,
+ :name => hash[:name],
+ :memory => memory,
+ :cpu => cpu,
+ :storage => storage,
+ :architecture => arch)
end
end
--
1.6.6.1