Git Workflow for aeolus components
by Steve Linabery
Hi aeolus community.
Apologies for the back-and-forth on git workflow. I think, based on the responses in the previous thread on this topic, that we have consensus around moving (back!) to what jayg was calling 'Option 1'.
Restating what jayg proposed in his previous thread on this topic:
1. All development happens on master
2. If there is a bugfix for a previous release that would go into a maintenance release, that gets cherry-picked off master and put on the release branch.
3. At the end of a sprint or set of sprints, another 'major' release branch is created off of master with its own set of bugfix releases.
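Sketched as git commands in a scratch repo (branch names and the BZ number are illustrative, not our actual release branches), the flow looks roughly like this:

```shell
# Self-contained sketch in a throwaway repo; branch names are illustrative.
set -e
cd "$(mktemp -d)" && git init -q . && git symbolic-ref HEAD refs/heads/master
git config user.email "dev@example.com" && git config user.name "Dev"

# 1. All development happens on master.
echo base > base.txt && git add base.txt && git commit -qm "initial"
git branch release-1.0                       # an earlier 'major' release branch
echo fix > fix.txt && git add fix.txt && git commit -qm "fix BZ#NNNNNN"
fix=$(git rev-parse HEAD)

# 2. A bugfix for a previous release is cherry-picked off master
#    onto the maintenance release branch.
git checkout -q release-1.0
git cherry-pick "$fix"

# 3. At the end of a sprint (or set of sprints), cut the next
#    'major' release branch off master.
git checkout -q master
git checkout -qb release-1.1
```

The key property is that master is always the source of truth: fixes land there first and flow outward to release branches, never the other way around.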
This approach, like the other one, carries its own set of challenges, but I think it will be the least error-prone and least onerous for developers.
If there is any lingering confusion, please ping eggs (me), eck, or jayg on IRC.
Thank you!
Steve Linabery
12 years, 3 months
[PATCH RFC/draft: conductor 0/1] - #791195 - First step towards fixing instance deletion
by Matt Wagner
This is some preliminary code I wrote, more focused on cleanup while I tried to reproduce the process of stopping instances on RHEV through this code. It is not tested and probably not complete; I'd welcome a look, but please don't ACK it, and *please* don't push it yet. :)
I had a number of issues building a guest for RHEV (I suspect they are just local issues with my client), so I ended up manually starting instances in RHEV and then attempting to stop them. I only got that working very recently and need to head out for the weekend; it succeeded once, but I haven't tested anything further.
I'm sharing this code to get some eyes on it, but I have no idea if it actually fixes anything yet. If someone wants to pick this task up, feel free to do so -- I don't fully understand the scope of the issues that led to the BZ's creation. If not, I'll pick it up on Monday and check in with others.
-- Matt
[PATCH conductor] Add support for overriding default i18n file(s)
by Jason Guiditta
We need to be able to allow users to customize their install of
conductor to have their own names for things within the app. This
simple patch adds that ability. Note that it also allows for nesting
our i18n files, so we don't have to keep them in one giant file as
we do now.
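For illustration, an override might look like this (the path and keys are hypothetical, not keys shipped with conductor; entries here win over the defaults because the overrides directory is appended to the load path last):

```yaml
# src/config/locales/overrides/en.yml -- hypothetical example
en:
  instances:
    index:
      title: "My Machines"   # replaces the default page heading for this key
```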
---
src/config/application.rb | 3 ++-
src/config/locales/overrides/README | 7 +++++++
2 files changed, 9 insertions(+), 1 deletions(-)
create mode 100644 src/config/locales/overrides/README
diff --git a/src/config/application.rb b/src/config/application.rb
index f2295ab..9c0949f 100644
--- a/src/config/application.rb
+++ b/src/config/application.rb
@@ -81,7 +81,8 @@ module Conductor
# config.time_zone = 'Central Time (US & Canada)'
# The default locale is :en and all translations from config/locales/*.rb,yml are auto loaded.
- # config.i18n.load_path += Dir[Rails.root.join('my', 'locales', '*.{rb,yml}').to_s]
+ config.i18n.load_path += Dir[Rails.root.join('config', 'locales', '**', '*.{rb,yml}').to_s]
+ config.i18n.load_path += Dir[Rails.root.join('config', 'locales', 'overrides','**', '*.{rb,yml}').to_s]
# config.i18n.default_locale = :de
# JavaScript files you want as :defaults (application.js is always included).
diff --git a/src/config/locales/overrides/README b/src/config/locales/overrides/README
new file mode 100644
index 0000000..b1089ca
--- /dev/null
+++ b/src/config/locales/overrides/README
@@ -0,0 +1,7 @@
+If you have a custom translation for a key used in the application,
+simply create the appropriate yml files/dirs for your supported languages in
+this directory and add the keys/values you want to use instead of the
+default. See the rails guide on i18n for examples[1]. If you follow the
+pattern in the parent locales/ dir, everything should 'just work'
+
+[1] http://guides.rubyonrails.org/i18n.html#organization-of-locale-files
--
1.7.7.6
[PATCH aeolus-image-rubygem] bug 786220 - Scope images to Pool Family
by Scott Seago
https://bugzilla.redhat.com/show_bug.cgi?id=786220
---
lib/aeolus_image/model/warehouse/image.rb | 10 ++++++++++
1 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/lib/aeolus_image/model/warehouse/image.rb b/lib/aeolus_image/model/warehouse/image.rb
index 7a06f0e..5bbc267 100644
--- a/lib/aeolus_image/model/warehouse/image.rb
+++ b/lib/aeolus_image/model/warehouse/image.rb
@@ -93,6 +93,10 @@ module Aeolus
@description
end
+ def environment
+ @environment
+ end
+
# Delete this image and all child objects
def delete!
begin
@@ -115,6 +119,12 @@ module Aeolus
xml = template_xml
xml.present? ? xml.xpath(path).text : ""
end
+
+ class << self
+ def by_environment(environment)
+ self.where("($environment == \"" + environment + "\")")
+ end
+ end
end
end
end
--
1.7.6.4
Re: #791195
by Matt Wagner
This was intended to be a comment on
https://bugzilla.redhat.com/show_bug.cgi?id=791195, but Bugzilla is
currently down. I started to look at this, but didn't get too far.
Here is the comment I tried to leave:
I started to take a look at this, but I'm mildly puzzled and I wonder if
the potential refactor can go deeper.
It looks like Taskomatic's "destroy_instance" doesn't actually have any
sort of retry logic, nor do I even see it happening in the background.
If I remove the retry logic, destroy_on_provider is left like this:
    if <really long conditional>
      @task = self.queue_action(self.owner, 'destroy')
      raise I18n.t"instance.errors.cannot_destroy" unless @task
      Taskomatic.destroy_instance(@task)
    end
queue_action just creates an Event and a Task object in the database and
returns the task.
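The shape of that, in plain Ruby, is roughly the following (Task, Event, and the method body here are hypothetical stand-ins for the ActiveRecord models, not the actual conductor code):

```ruby
# Hypothetical stand-ins for the ActiveRecord models.
Task  = Struct.new(:owner, :action)
Event = Struct.new(:summary)

$events = []   # stand-in for the events table

# Per the description above: record an Event, create a Task, return the Task.
def queue_action(owner, action)
  $events << Event.new("#{action} queued by #{owner}")
  Task.new(owner, action)
end
```

So by itself queue_action performs no provider-side work at all; everything after it hangs off the returned task.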
We then call Taskomatic.destroy_instance, which updates some metadata on
@task and then calls "destroy!" on the instance via Deltacloud. If that
fails, we update the task and return.
If the 500 retries existed solely to guard against the task not existing
(!), then we can indeed drop it. But I rather assumed it was meant to
guard against API errors as well.
Re: #3, I'm not sure I understand. If we just moved it to after_update,
we'd delete the instance on the provider any time its state changed. I
think the current before_destroy hook is correct. (Though we do have an
instance_observer we could use.)
[PATCH conductor 0/1] #786844 - Reporting of deleted instances
by Matt Wagner
Hi all,
This patch diverges slightly from what we had discussed previously[1], but the aim is roughly the same: if an instance is missing on the backend provider after two consecutive checks, we can surmise that it has been deleted there and should remove it from our database. (Currently, we completely ignore it and just leave it in its previous state.)
I was getting some pretty reliable segfaults related to Nokogiri earlier when running this, though it would *eventually* succeed as it kept retrying. I'm hoping this was a transient error and not something that reliably fails for others. (Of course, more than that, I'm hoping that we get a reliable Nokogiri/Ruby setup soon.)
I had a miserable adventure trying to properly keep a counter of missed checks that persisted between runs, given that we are using this crazy blend of global variables and threads. I decided that the only real fix was to persist this information on the Instance model, but adding a counter column for this felt quite hackish, so I went ahead and just added the "vanished" state I had proposed for a longer-term fix. Currently we only use the vanished state to indicate that it failed the previous check, and if we find it on the next run, we delete the instance.
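The two-check logic can be sketched in plain Ruby like so (the method and state names are illustrative assumptions, not the actual dbomatic code):

```ruby
# Illustrative sketch of the two-strike deletion described above; Instance is
# a hypothetical stand-in for the ActiveRecord model.
Instance = Struct.new(:name, :state) do
  def destroy; self.state = :deleted; end   # stand-in for DB removal
end

def check_instance(instance, found_on_provider)
  if found_on_provider
    # Found again: clear the flag set by a previous miss.
    instance.state = :running if instance.state == :vanished
  elsif instance.state == :vanished
    instance.destroy                  # missing twice in a row: delete it
  else
    instance.state = :vanished        # first miss: flag it, wait one cycle
  end
end
```

A vanished instance that reappears on the next check is simply flagged back to running rather than deleted, which is what makes the single "vanished" state enough to replace a persisted counter.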
I wanted to write some RSpec tests for this, but dbomatic currently has no tests that I can find, and I don't even know how I would go about writing tests for something that doesn't use classes. We might want to think about refactoring dbomatic down the road. (IMHO)
For what it's worth, a few notes about testing this:
a.) EC2 appears to wait about an hour after termination before it stops reporting the instance in its instance list. You probably shouldn't test there unless you have a high amount of patience. (I also have a Deltacloud patch that fixes a possible error condition there.)
b.) Mock stores its "instances" in /var/tmp/deltacloud-mock-nobody/instances, though I think there was some caching going on, so you may have to restart deltacloud after deleting an instance there. That's probably the most expedient means of testing this.
-- Matt
[1] http://lists.fedorahosted.org/pipermail/aeolus-devel/2012-February/008735...