taskomatic plans
by Ian Main
So I've been looking over some things and doing a lot of pondering on
how to move forward with taskomatic for deltacloud.
The plan I'm working with now is to:
- Finish the infrastructure for dependency information between tasks
  and implement threading. I think it would be wise to just do this
  up front so that as we implement tasks we can figure out the
  dependencies and implement them. The task dependencies will work as
  David and I worked out before for ovirt: a class for each task
  type, with an instance for each action added to a list that each
  instance can scan to determine whether it can run. I actually
  don't think this will add too much difficulty given the limited
  number of tasks in dcloud.
- Implement a separate thread that deals with updates from the service
  providers. This will basically be the "dbomatic" for dcloud. The
  poll interval will be based on an XML config value in each driver and
  will likely be fairly long (I'm thinking 60 seconds or so, depending
  on the driver).
- Plumb in a system that allows quicker updates to objects on which
  recent actions have taken place. For example, when an instance is
  started we want to poll that instance more frequently until its
  state changes so that we can inform the user more quickly. I am
  unsure whether I will just use a separate thread (or even make it
  part of the task) to do the updates or somehow notify the main
  update thread. I suspect another thread will be easier.
- Work on/test task implementations to make them reliable.
- I'm not going to worry about torquebox for now. If we do end up
  using it, its queuing system may be useful as a method to notify
  taskomatic of new tasks.
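To make the first bullet above concrete, here is a minimal Ruby sketch of the dependency-scanning idea: one class per task type, one instance per action on a shared list, each instance scanning the list to decide whether it can run. All class and method names here are hypothetical, and a real version would need thread safety around the list:

```ruby
# Hypothetical sketch: an instance may run only when no earlier
# still-queued task targets the same object.
class Task
  LIST = []  # all submitted tasks, oldest first

  attr_reader :target_id, :state

  def initialize(target_id)
    @target_id = target_id
    @state = :queued
    LIST << self
  end

  # scan the list: blocked if an earlier queued task shares our target
  def runnable?
    LIST.each do |t|
      break if t.equal?(self)
      return false if t.state == :queued && t.target_id == target_id
    end
    true
  end

  def run!
    @state = :done
  end
end

class StartInstanceTask < Task; end
class StopInstanceTask  < Task; end

start = StartInstanceTask.new(42)   # start instance 42
stop  = StopInstanceTask.new(42)    # then stop it

stop.runnable?    # false until the start task finishes
start.run!
stop.runnable?    # now true
```

The scan is O(n) per task, but as noted above that seems acceptable given the limited number of tasks in dcloud.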
Sound reasonable to everyone?
Ian
[PATCH] added initial auth functionality based on authlogic.
by Scott Seago
At the moment you've got to install the authlogic gem -- we will eventually need to package it as an RPM and add it as a dependency in the specfile.
Right now any user can create an account and access all pages. We aren't doing any access control beyond requiring a login account.
Signed-off-by: Scott Seago <sseago(a)redhat.com>
---
src/app/controllers/application_controller.rb | 39 ++++++++++++++++++++
src/app/controllers/instance_controller.rb | 2 +
src/app/controllers/portal_pool_controller.rb | 2 +
src/app/controllers/provider_controller.rb | 2 +
src/app/controllers/user_sessions_controller.rb | 24 ++++++++++++
src/app/controllers/users_controller.rb | 36 ++++++++++++++++++
src/app/helpers/user_sessions_helper.rb | 2 +
src/app/helpers/users_helper.rb | 2 +
src/app/models/user.rb | 3 ++
src/app/models/user_session.rb | 8 ++++
src/app/views/layouts/_header.rhtml | 4 +-
src/app/views/layouts/dcloud.rhtml | 2 +-
src/app/views/user_sessions/login.html.erb | 19 ++++++++++
src/app/views/users/_form.erb | 10 +++++
src/app/views/users/edit.html.erb | 12 ++++++
src/app/views/users/new.html.erb | 10 +++++
src/app/views/users/show.html.erb | 37 +++++++++++++++++++
src/config/environment.rb | 1 +
src/config/routes.rb | 8 +++-
src/db/migrate/20090917192602_create_users.rb | 30 +++++++++++++++
src/test/fixtures/users.yml | 7 ++++
.../functional/user_sessions_controller_test.rb | 8 ++++
src/test/functional/users_controller_test.rb | 8 ++++
src/test/unit/helpers/user_sessions_helper_test.rb | 4 ++
src/test/unit/helpers/users_helper_test.rb | 4 ++
src/test/unit/user_test.rb | 8 ++++
26 files changed, 287 insertions(+), 5 deletions(-)
create mode 100644 src/app/controllers/user_sessions_controller.rb
create mode 100644 src/app/controllers/users_controller.rb
create mode 100644 src/app/helpers/user_sessions_helper.rb
create mode 100644 src/app/helpers/users_helper.rb
create mode 100644 src/app/models/user.rb
create mode 100644 src/app/models/user_session.rb
create mode 100644 src/app/views/user_sessions/login.html.erb
create mode 100644 src/app/views/users/_form.erb
create mode 100644 src/app/views/users/edit.html.erb
create mode 100644 src/app/views/users/new.html.erb
create mode 100644 src/app/views/users/show.html.erb
create mode 100644 src/db/migrate/20090917192602_create_users.rb
create mode 100644 src/test/fixtures/users.yml
create mode 100644 src/test/functional/user_sessions_controller_test.rb
create mode 100644 src/test/functional/users_controller_test.rb
create mode 100644 src/test/unit/helpers/user_sessions_helper_test.rb
create mode 100644 src/test/unit/helpers/users_helper_test.rb
create mode 100644 src/test/unit/user_test.rb
diff --git a/src/app/controllers/application_controller.rb b/src/app/controllers/application_controller.rb
index d32f1de..99433c6 100644
--- a/src/app/controllers/application_controller.rb
+++ b/src/app/controllers/application_controller.rb
@@ -24,6 +24,8 @@
class ApplicationController < ActionController::Base
# FIXME: not sure what we're doing aobut service layer w/ deltacloud
include ApplicationService
+ filter_parameter_logging :password, :password_confirmation
+ helper_method :current_user_session, :current_user
init_gettext "ovirt"
layout :choose_layout
@@ -186,4 +188,41 @@ class ApplicationController < ActionController::Base
end
return hash
end
+
+ def current_user_session
+ return @current_user_session if defined?(@current_user_session)
+ @current_user_session = UserSession.find
+ end
+
+ def current_user
+ return @current_user if defined?(@current_user)
+ @current_user = current_user_session && current_user_session.user
+ end
+
+ def require_user
+ unless current_user
+ store_location
+ flash[:notice] = "You must be logged in to access this page"
+ redirect_to login_url
+ return false
+ end
+ end
+
+ def require_no_user
+ if current_user
+ store_location
+ flash[:notice] = "You must be logged out to access this page"
+ redirect_to account_url
+ return false
+ end
+ end
+
+ def store_location
+ session[:return_to] = request.request_uri
+ end
+
+ def redirect_back_or_default(default)
+ redirect_to(session[:return_to] || default)
+ session[:return_to] = nil
+ end
end
diff --git a/src/app/controllers/instance_controller.rb b/src/app/controllers/instance_controller.rb
index 398fdde..59e19ed 100644
--- a/src/app/controllers/instance_controller.rb
+++ b/src/app/controllers/instance_controller.rb
@@ -1,4 +1,6 @@
class InstanceController < ApplicationController
+ before_filter :require_user
+
def index
end
diff --git a/src/app/controllers/portal_pool_controller.rb b/src/app/controllers/portal_pool_controller.rb
index 49c2d90..9f41cf6 100644
--- a/src/app/controllers/portal_pool_controller.rb
+++ b/src/app/controllers/portal_pool_controller.rb
@@ -1,4 +1,6 @@
class PortalPoolController < ApplicationController
+ before_filter :require_user
+
def index
render :action => 'new'
end
diff --git a/src/app/controllers/provider_controller.rb b/src/app/controllers/provider_controller.rb
index 9b5e841..e0a96b7 100644
--- a/src/app/controllers/provider_controller.rb
+++ b/src/app/controllers/provider_controller.rb
@@ -1,4 +1,6 @@
class ProviderController < ApplicationController
+ before_filter :require_user
+
def index
render :action => 'new'
end
diff --git a/src/app/controllers/user_sessions_controller.rb b/src/app/controllers/user_sessions_controller.rb
new file mode 100644
index 0000000..c6cdd80
--- /dev/null
+++ b/src/app/controllers/user_sessions_controller.rb
@@ -0,0 +1,24 @@
+class UserSessionsController < ApplicationController
+ before_filter :require_no_user, :only => [:new, :create]
+ before_filter :require_user, :only => :destroy
+
+ def login
+ @user_session = UserSession.new
+ end
+
+ def create
+ @user_session = UserSession.new(params[:user_session])
+ if @user_session.save
+ flash[:notice] = "Login successful!"
+ redirect_back_or_default account_url
+ else
+ render :action => :new
+ end
+ end
+
+ def logout
+ current_user_session.destroy
+ flash[:notice] = "Logout successful!"
+ redirect_back_or_default login_url
+ end
+end
diff --git a/src/app/controllers/users_controller.rb b/src/app/controllers/users_controller.rb
new file mode 100644
index 0000000..9fd4212
--- /dev/null
+++ b/src/app/controllers/users_controller.rb
@@ -0,0 +1,36 @@
+class UsersController < ApplicationController
+ before_filter :require_no_user, :only => [:new, :create]
+ before_filter :require_user, :only => [:show, :edit, :update]
+
+ def new
+ @user = User.new
+ end
+
+ def create
+ @user = User.new(params[:user])
+ if @user.save
+ flash[:notice] = "User registered!"
+ redirect_back_or_default account_url
+ else
+ render :action => :new
+ end
+ end
+
+ def show
+ @user = @current_user
+ end
+
+ def edit
+ @user = @current_user
+ end
+
+ def update
+ @user = @current_user # makes our views "cleaner" and more consistent
+ if @user.update_attributes(params[:user])
+ flash[:notice] = "User updated!"
+ redirect_to account_url
+ else
+ render :action => :edit
+ end
+ end
+end
diff --git a/src/app/helpers/user_sessions_helper.rb b/src/app/helpers/user_sessions_helper.rb
new file mode 100644
index 0000000..2018402
--- /dev/null
+++ b/src/app/helpers/user_sessions_helper.rb
@@ -0,0 +1,2 @@
+module UserSessionsHelper
+end
diff --git a/src/app/helpers/users_helper.rb b/src/app/helpers/users_helper.rb
new file mode 100644
index 0000000..2310a24
--- /dev/null
+++ b/src/app/helpers/users_helper.rb
@@ -0,0 +1,2 @@
+module UsersHelper
+end
diff --git a/src/app/models/user.rb b/src/app/models/user.rb
new file mode 100644
index 0000000..04c7d17
--- /dev/null
+++ b/src/app/models/user.rb
@@ -0,0 +1,3 @@
+class User < ActiveRecord::Base
+ acts_as_authentic
+end
diff --git a/src/app/models/user_session.rb b/src/app/models/user_session.rb
new file mode 100644
index 0000000..6f99d26
--- /dev/null
+++ b/src/app/models/user_session.rb
@@ -0,0 +1,8 @@
+class UserSession < Authlogic::Session::Base
+ # gettext_activerecord defines a gettext method for the activerecord
+ # Validations module. Authlogic uses these Validations also but does
+ # not define the gettext method. We define it here instead.
+ def gettext(str)
+ GetText._(str)
+ end
+end
diff --git a/src/app/views/layouts/_header.rhtml b/src/app/views/layouts/_header.rhtml
index 76f9e73..f85510f 100644
--- a/src/app/views/layouts/_header.rhtml
+++ b/src/app/views/layouts/_header.rhtml
@@ -1,10 +1,10 @@
<div class="header_logo"><%= image_tag "dcloud.png" %></div>
<div class="header_info">
- <div id="hi-username">Hi, <%= @user %></div>
+ <div id="hi-username"><%= "Hi, " + @current_user.login if defined? @current_user %></div>
<form method="POST" id="search-form" action="<%= url_for :controller => "search", :action => 'results' %>">
<input id="textfield_effect" name="terms" value="Search" onkeypress="" onfocus="if( this.value == this.defaultValue ) this.value='';" type="text">
<input id="searchbox-button" src="<%= image_path "icon_search.png"%>" title="Search" type="image"> |
</form>
- <%= link_to 'Log out', { :controller => "login", :action => "logout"}%>
+ <%= link_to 'Log out', { :controller => "user_sessions", :action => "logout"}%>
</div>
\ No newline at end of file
diff --git a/src/app/views/layouts/dcloud.rhtml b/src/app/views/layouts/dcloud.rhtml
index 725f630..53dfc67 100644
--- a/src/app/views/layouts/dcloud.rhtml
+++ b/src/app/views/layouts/dcloud.rhtml
@@ -69,7 +69,7 @@
</div>
<div id="side">
- <%= render :partial => '/layouts/main_nav' %>
+ <%= render :partial => '/layouts/main_nav' if defined? @current_user %>
</div>
<div id="tabs-and-content-container">
diff --git a/src/app/views/user_sessions/login.html.erb b/src/app/views/user_sessions/login.html.erb
new file mode 100644
index 0000000..1322fa2
--- /dev/null
+++ b/src/app/views/user_sessions/login.html.erb
@@ -0,0 +1,19 @@
+<div class="dcloud_form">
+ <%= error_messages_for 'user_session' %>
+ <h2>Login</h2>
+
+ <% form_for @user_session, :url => user_session_path do |f| %>
+ <%= f.error_messages %>
+ <%= f.label :login %><br />
+ <%= f.text_field :login %><br />
+ <br />
+ <%= f.label :password %><br />
+ <%= f.password_field :password %><br />
+ <br />
+ <%= f.check_box :remember_me %><%= f.label :remember_me %><br />
+ <br />
+ <%= f.submit "Login" %>
+ <% end %>
+ <%= link_to 'Register', {:controller => 'users', :action => 'new'}, :class => 'actionlink' %>
+
+</div>
diff --git a/src/app/views/users/_form.erb b/src/app/views/users/_form.erb
new file mode 100644
index 0000000..a278d9a
--- /dev/null
+++ b/src/app/views/users/_form.erb
@@ -0,0 +1,10 @@
+<%= form.label :login %><br />
+<%= form.text_field :login %><br />
+<%= form.label :email %><br />
+<%= form.text_field :email %><br />
+<br />
+<%= form.label :password, form.object.new_record? ? nil : "Change password" %><br />
+<%= form.password_field :password %><br />
+<br />
+<%= form.label :password_confirmation %><br />
+<%= form.password_field :password_confirmation %><br />
diff --git a/src/app/views/users/edit.html.erb b/src/app/views/users/edit.html.erb
new file mode 100644
index 0000000..7c9db32
--- /dev/null
+++ b/src/app/views/users/edit.html.erb
@@ -0,0 +1,12 @@
+<div class="dcloud_form">
+ <%= error_messages_for 'user' %>
+ <h2>Edit My Profile</h2>
+
+ <% form_for @user, :url => account_path do |f| %>
+ <%= f.error_messages %>
+ <%= render :partial => "form", :object => f %>
+ <%= f.submit "Update" %>
+ <% end %>
+
+ <br /><%= link_to "My Profile", account_path %>
+</div>
diff --git a/src/app/views/users/new.html.erb b/src/app/views/users/new.html.erb
new file mode 100644
index 0000000..681d9b0
--- /dev/null
+++ b/src/app/views/users/new.html.erb
@@ -0,0 +1,10 @@
+<div class="dcloud_form">
+ <%= error_messages_for 'user' %>
+ <h2>Register</h2>
+
+ <% form_for @user, :url => account_path do |f| %>
+ <%= f.error_messages %>
+ <%= render :partial => "form", :object => f %>
+ <%= f.submit "Register" %>
+ <% end %>
+</div>
diff --git a/src/app/views/users/show.html.erb b/src/app/views/users/show.html.erb
new file mode 100644
index 0000000..e4e6e67
--- /dev/null
+++ b/src/app/views/users/show.html.erb
@@ -0,0 +1,37 @@
+<p>
+ <b>Login:</b>
+ <%=h @user.login %>
+</p>
+
+<p>
+ <b>Login count:</b>
+ <%=h @user.login_count %>
+</p>
+
+<p>
+ <b>Last request at:</b>
+ <%=h @user.last_request_at %>
+</p>
+
+<p>
+ <b>Last login at:</b>
+ <%=h @user.last_login_at %>
+</p>
+
+<p>
+ <b>Current login at:</b>
+ <%=h @user.current_login_at %>
+</p>
+
+<p>
+ <b>Last login ip:</b>
+ <%=h @user.last_login_ip %>
+</p>
+
+<p>
+ <b>Current login ip:</b>
+ <%=h @user.current_login_ip %>
+</p>
+
+
+<%= link_to 'Edit', edit_account_path %>
diff --git a/src/config/environment.rb b/src/config/environment.rb
index 5098306..9a3433d 100644
--- a/src/config/environment.rb
+++ b/src/config/environment.rb
@@ -42,6 +42,7 @@ Rails::Initializer.run do |config|
# config.gem "aws-s3", :lib => "aws/s3"
config.gem "gettext", :lib => "gettext_rails"
config.gem "gettext", :lib => "gettext_activerecord"
+ config.gem "authlogic"
# Only load the plugins named here, in the order given. By default, all plugins
# in vendor/plugins are loaded in alphabetical order.
diff --git a/src/config/routes.rb b/src/config/routes.rb
index ae9bc75..d2a171b 100644
--- a/src/config/routes.rb
+++ b/src/config/routes.rb
@@ -33,8 +33,12 @@ ActionController::Routing::Routes.draw do |map|
# -- just remember to delete public/index.html.
map.default '', :controller => 'provider'
- map.login '/login', :controller => 'resources', :action => 'list'
- map.logout '/logout', :controller => 'resources', :action => 'list'
+ map.resource :user_session
+ map.root :controller => "user_sessions", :action => "login"
+ map.login 'login', :controller => "user_sessions", :action => "login"
+ map.resource :account, :controller => "users"
+ map.resources :users
+
# Allow downloading Web Service WSDL as a file with an extension
# instead of a file named 'wsdl'
diff --git a/src/db/migrate/20090917192602_create_users.rb b/src/db/migrate/20090917192602_create_users.rb
new file mode 100644
index 0000000..1bcf0d4
--- /dev/null
+++ b/src/db/migrate/20090917192602_create_users.rb
@@ -0,0 +1,30 @@
+class CreateUsers < ActiveRecord::Migration
+ def self.up
+ create_table :users do |t|
+ t.string :login, :null => false
+ t.string :email, :null => false
+ t.string :crypted_password, :null => false
+ t.string :password_salt, :null => false
+ t.string :persistence_token, :null => false
+ t.string :single_access_token, :null => false
+ t.string :perishable_token, :null => false
+ # Magic columns, just like ActiveRecord's created_at and updated_at.
+ # These are automatically maintained by Authlogic if they are present.
+ t.integer :login_count, :null => false, :default => 0
+ t.integer :failed_login_count, :null => false, :default => 0
+ t.datetime :last_request_at
+ t.datetime :current_login_at
+ t.datetime :last_login_at
+ t.string :current_login_ip
+ t.string :last_login_ip
+ t.timestamps
+ end
+ add_index :users, :login
+ add_index :users, :persistence_token
+ add_index :users, :last_request_at
+ end
+
+ def self.down
+ drop_table :users
+ end
+end
diff --git a/src/test/fixtures/users.yml b/src/test/fixtures/users.yml
new file mode 100644
index 0000000..5bf0293
--- /dev/null
+++ b/src/test/fixtures/users.yml
@@ -0,0 +1,7 @@
+# Read about fixtures at http://ar.rubyonrails.org/classes/Fixtures.html
+
+# one:
+# column: value
+#
+# two:
+# column: value
diff --git a/src/test/functional/user_sessions_controller_test.rb b/src/test/functional/user_sessions_controller_test.rb
new file mode 100644
index 0000000..5024b23
--- /dev/null
+++ b/src/test/functional/user_sessions_controller_test.rb
@@ -0,0 +1,8 @@
+require 'test_helper'
+
+class UserSessionsControllerTest < ActionController::TestCase
+ # Replace this with your real tests.
+ test "the truth" do
+ assert true
+ end
+end
diff --git a/src/test/functional/users_controller_test.rb b/src/test/functional/users_controller_test.rb
new file mode 100644
index 0000000..c3db123
--- /dev/null
+++ b/src/test/functional/users_controller_test.rb
@@ -0,0 +1,8 @@
+require 'test_helper'
+
+class UsersControllerTest < ActionController::TestCase
+ # Replace this with your real tests.
+ test "the truth" do
+ assert true
+ end
+end
diff --git a/src/test/unit/helpers/user_sessions_helper_test.rb b/src/test/unit/helpers/user_sessions_helper_test.rb
new file mode 100644
index 0000000..20dabdf
--- /dev/null
+++ b/src/test/unit/helpers/user_sessions_helper_test.rb
@@ -0,0 +1,4 @@
+require 'test_helper'
+
+class UserSessionsHelperTest < ActionView::TestCase
+end
diff --git a/src/test/unit/helpers/users_helper_test.rb b/src/test/unit/helpers/users_helper_test.rb
new file mode 100644
index 0000000..96af37a
--- /dev/null
+++ b/src/test/unit/helpers/users_helper_test.rb
@@ -0,0 +1,4 @@
+require 'test_helper'
+
+class UsersHelperTest < ActionView::TestCase
+end
diff --git a/src/test/unit/user_test.rb b/src/test/unit/user_test.rb
new file mode 100644
index 0000000..a64d2d3
--- /dev/null
+++ b/src/test/unit/user_test.rb
@@ -0,0 +1,8 @@
+require 'test_helper'
+
+class UserTest < ActiveSupport::TestCase
+ # Replace this with your real tests.
+ test "the truth" do
+ assert true
+ end
+end
--
1.6.2.5
Re: [deltacloud-devel] [Fwd: Thoughts on Deltacloud monitoring - now called Spectre]
by jason.guiditta
> OK, so I've been mulling this over after some talks with Jay last
> week. Thought I'd put something in writing. This isn't intended to be
> a solution; rather, the intent is to capture thoughts on what may be
> needed for the design, and then we can discuss the best implementation.
>
> It is not clear to me where to draw the line for the component
> responsible for various pieces of functionality. I will list *most
> everything* I can think of here, and the ensuing discussions can help
> sort out where that functionality should lie.
>
> In lieu of hard requirements, I have listed some assumptions that I'm
> starting with to frame the discussion. If you feel that any of these
> are incorrect, please speak up so we don't spend too much time on bad
> assumptions.
>
>
> Assumptions
> -----------
> 1) For the sake of this discussion, I am listing a lot of functionality
>    that may not be in the initial release. Best to consider it now and
>    make sure the design can handle it. Also, some of this functionality
>    will most likely not end up in the monitoring component, but it
>    needs to be discussed, if for no reason other than to explain what
>    the trade-offs are...
>
>
> 2) The use of this monitoring data will need to cover one to many users.
>    a) There is no real upper bound to the number of users at this point
>       in time. However, we should decide on the order of magnitude
>       (1K, 100K, etc.)
>    b) Must be able to provide data to each individual user concurrently.
>    c) Must provide the capability to arrange users and aggregate data in
>       groups and in a hierarchical manner. (This is an example of my
>       stating functionality that some Deltacloud component should
>       provide; I mention it here as it could impact the design even if
>       the "ownership" ends up elsewhere. Best to decide "where does
>       this functionality belong?" in this discussion and understand its
>       ramifications.)
>
[jg] I believe this one is outside the scope of Spectre, and would be
handled by pools/users/permissions in the deltacloud portal (though
perhaps there could be some simplified representation of this here, if
it makes sense). I agree that it may have an impact on our design here
though.
>
> 3) Billing does not imply a simple end of the month run, but rather the
> ability to monitor usage and calculate running "balance" (but does
> not need to consider payments outside of "reset" functionality.)
> a) down the road we will want to be able to provide some enforcement
> of spending (Jay is only budgeted to spend $100, don't let him
> go over it)
> As Jay has pointed out, this is most likely outside the scope of
> Spectre, except for the following (but this is why we need to
> understand the entire problem):
> 1. Spectre needs to collect whatever data portal requires for billing
> 2. Archiving of this information beyond the time a provider may do so
> needs to be accounted for by Spectre and configurable per provider
>
> 4) Data collection from the various clouds will be a "pull" event. In my
> mind, pull is preferred because it allows the app to control the load
> it generates. With a push model, it could lead to server overload.
>
[jg] I had not considered this as a potential issue. There had been
some talk of hoping for push to be supported by some vendors in the
future. Maybe it is just my web background, but polling feels less
efficient to me.
> a) It also needs to be dynamic, so as the user adds additional VMs
> the data for those is also gathered. So "poll" is still technically
> what is happening; we just need a dynamic polling mechanism.
>
>
[jg] Some kind of trigger may be a good idea, but could we not also get new
vms and such on the next scheduled 'poll' of information?
> 5) For monitoring, data will be collected on all "active" Deltacloud users.
> Here "active" user implies that the user is logged in; or someone is
> monitoring the group that the user is part of.
> a) This is just a "stake in the ground". We really should investigate
> whether it's more efficient to get all the data all the time or only
> as needed.
> b) if we only get data as needed, we will need some extra logic to
> determine if there are "holes" in the data and catch up as needed.
> ** Jay has mentioned that some flexibility should be built in here;
> there may be providers who charge for accessing the data based on the
> access rate.
>
> 6) In general, I can see a need for "business logic" specific to
>    each vendor's cloud. This will include billing rates, types of data
> available, etc. I'm wondering if there is a "global" way to handle
> this in Deltacloud already...
>    a) At some level, the data retrieval / monitoring aspects will need
>       to have cloud-specific knowledge. For instance, CloudWatch keeps
>       the data for two weeks. Exactly where in the design this
>       information needs to be kept still needs to be determined.
>    My main goal for thinking of a business layer is to allow
>    vendor-specific "modules" to be created and used w/o any changes to
>    the underlying stats layer.
>
[jg] Agreed, I see this potentially varying a vast amount from one provider
to the next. However, I think the business logic for spectre should be
limited to retrieving the information available, with any
provider-specific logic being more for aggregation or other data views
(though perhaps there could be provider-specific modules on the retrieval
side of the api).
Billing I see being handled in the portal (or possibly even the
framework, since that has provider-specific drivers also, so it could
encapsulate billing logic in a common api that the portal could use).
> 7) It is assumed that any Red Hat cloud product will be treated like JAC
> (Just Another Cloud).
> 8) Users: this could be one of the more complicated pieces to get right.
>    My initial thought is a single user can have many different cloud
>    associations. However, it's not clear if we need to track the case
>    where a single cloud account could be shared between users and we
>    need to account for each user's usage. Again, please keep in mind
>    that this document is intended to cover all the functionality we see
>    down the road, so it may not be an initial release goal but we may
>    need to plan for it.
>
> So my thoughts are that the user management, tracking, etc. is done
> under some other Deltacloud component, and Spectre will just need to be
> able to store and retrieve data based on some type of unique user /
> cloud identifier. (hey, it's hard to pull implementation out of design)
>
[jg] To help clarify (and make sure I am right), I believe with what we
have so far on the portal side of the design, we would have cloud
account X. As far as the provider is concerned, X is the only account.
Portal, however, may have N accounts associated with X. Each of these
accounts can have 0 or more VMs (and possibly other things), so on the
monitoring side, if we can find things based on account X and some
identifier for the actual item requested, I think that will get us what
we need w/o building in too much business logic that is already handled
elsewhere.
> However, we still need to define things like a data request. Should we
> require the caller to be very specific about the user/cloud
> combinations? And lots of other stuff like that...
>
> 9) Spectre will need to be able to provide data to a caller for a
>    specific cloud in a manner that is more efficient than recursively
>    walking a list of users. Thus Spectre must be able to retrieve data
>    based on user or cloud or a combination of both.
> 10) I would expect that the Spectre functionality is provided in a manner
> that would allow for it to be distributed, federated or as a "module"
> to some other application. My initial thought would be a web service
> type of architecture, but let's not get ahead of ourselves...
>
[jg] Agreed on both points.
>
>
>
> Straw man proposal (http://en.wikipedia.org/wiki/Straw_man_proposal)
> ------------------
>
> In general, we need to define something that stores and retrieves data,
> that has clear APIs for both the top and bottom layers. For this
> discussion, I would propose that we view Spectre with three distinct
> layers:
> 1) On the "top" level, we need to define methods for data insertion and
> retrieval.
> 2) The middle layer is basically the traffic cop; it is also the least
> defined.
> 3) The bottom level provides the interaction to the data store(s). For the
> bottom level, we need to define an API that we will call to interact
> with the data store(s).
> I will dive a little deeper into these layers below.
>
> I am assuming that specific interfaces for both the top and bottom layers
> will need to be created in order to interface to different cloud vendors
> or data stores. For the sake of discussion, I will refer to these as
> "modules" although the intent is to help facilitate the discussion,
> not to imply a specific implementation.
>
> One of the main goals of the resultant architecture is that a new
> vendor-specific module can be created for either the top or bottom
> layers without impacting the design / code of the middle stats layer.
>
>
> Let's try this bottom up.
> -------------------------
> The goal of defining an API at the bottom of this stack is to allow
> for different data stores to be used. By clearly defining the API,
> anyone should be able to write their own interface layer to the data
> store. Examples of data stores would be RRD, memcache, MySQL, etc.
>
[jg] I think I agree with this sentiment, but believe you are proposing
one level 'down' from what I originally thought. So, we already have
the concept of provider 'drivers' for collecting the info. Let's say,
for the sake of argument, that we extended our existing 'stats' module
to include write functionality. I think the driver would then call
stats to save whatever it had collected (maybe too
implementation-specific, but not sure how else to describe it). Stats
in turn would call the actual 'save' against whatever datastore is
enabled (this being the additional layer I think you are describing).
Is this what you meant?
> This would seem to imply that the data store-specific module must be
> able to translate a request for "Mark's EC2 usage data from last week"
> into the specific language needed to access its data store. It will
> need to format the response from its data store back into a common
> (but yet to be defined) intermediate data format and pass it "up the
> stack".
>
[jg] I was not really thinking of multiple potential languages, mainly
just Ruby, but this seems like a reasonable requirement as long as we
keep it as simple as possible so we don't get bogged down in
implementation. First thought would be XML, as most languages can
handle that.
> Configuration data (user credentials, db name, directory structure, etc.)
> - Two ways to go here.
> a) Each data store module should be responsible for the required
>    configuration information. This should be loaded by the module when
>    it is initialized. In other words, the data service should not need
>    to know any of the details.
> b) Alternatively, the API should include a "required" set of calls
>    that the service can call to retrieve the configuration data and
>    store the information. (The module indicates it needs dbname,
>    userid, pwd, etc.) This way the overall service would still be used
>    for controlling the configuration. The parameters would need to be
>    supplied by the module so that the data service could remain data
>    store agnostic.
>
> Not sure if we should allow the use of multiple modules simultaneously.
> It would be nice to use a memcache-like mechanism for some things to
> avoid more expensive lookups... This could be something like a
> write-through cache. Lots to discuss...
>
[jg] Initially I thought just one for storage, but upon further
reflection, it could be really nice to be able to 'layer' this, imo.
So perhaps check a cache of some sort first, then move down to the true
storage layer if data is not found.
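The layering idea being discussed here could look something like the following sketch: a cache store sits in front of the real store, reads check the cache first and fall through on a miss, and writes go through to both. All names are invented, and a real cache layer would also need expiry:

```ruby
# Hypothetical write-through layering: `cache` and `backing` can be any
# hash-like stores (memcache-like in front, RRD/MySQL-like behind).
class LayeredStore
  def initialize(cache, backing)
    @cache, @backing = cache, backing
  end

  def write(key, value)
    @cache[key] = value        # write-through: keep the cache warm
    @backing[key] = value
  end

  def read(key)
    @cache.fetch(key) do       # cache miss: fall through to backing store
      value = @backing[key]
      @cache[key] = value unless value.nil?
      value
    end
  end
end

cache   = {}
backing = { "mark/cpu" => 0.4 }   # pre-existing data, not yet cached
store   = LayeredStore.new(cache, backing)

store.read("mark/cpu")   # => 0.4, fetched from backing and now cached
```

Because both layers share one hash-like interface, more levels (say, an in-process cache in front of memcache in front of MySQL) could be stacked the same way.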
>
>
>
> Middle Layer
> ------------
>
> This is the traffic cop of the data service. It could be fairly
> lightweight, just taking input, validating it, and passing it through.
> For data requests, it could do some coalescing of requests, provide the
> ability to look in a local data cache, etc.
>
> One major area that will need to be looked at is security. This layer
> would seem to be the location where any security would be implemented.
> Not sure how much we want or need.
>
> If this layer does any work on data, it will be in the "intermediate format".
>
>
> Top Layer
> ---------
>
> This is the layer that is called to store or retrieve data. My initial thoughts
> are that while these two types of functionality are at the same level, they
> are drastically different in what they do so I'll treat them separately.
>
>
[jg] One thought: since this is what everything would call, if we have
auth in the middle layer, then the top needs to accept credentials and
pass them down to the middle layer.
> Data Input API ("mystery data collection module")
> --------------
>
> This provides an API for storing data in the data service. It will be
> called
> by the cloud specific module. My thoughts are that these modules will be
> used to pull data from the cloud and push it into the data store.
>
> It is the responsibility of the module to take the data from the cloud
> and translate it into the intermediate data format.
>
> It should be possible for many different modules to access this API in
> parallel.
>
>
> Data Retrieval
> ---------------
> There is clearly a need to provide data back to a caller in different
> formats.
> This API will need to support that. I think the main design decision would
> be how to structure this. I am almost thinking that sticking with the
> module
> design will work well. This will allow end users to add their own modules.
> It
> will also allow for mechanisms to allow more levels of data processing to
> be
> added w/o polluting the main API. For instance, you could create a module
> to
> compute rolling averages.
>
> So this layer will need some thought to choose the right solution for
> future
> needs and maintainability. (Hint: it's easier to drop support for a module
> than to
> change the main API down the road...)
>
> This level must be able to translate a request for "Marks EC2 usage data
> from last week" into the intermediate language.
>
> It should go w/o saying that this API must support concurrent access.
>
>
> Higher Level questions
> -----------------------
> So some design questions that will hopefully lead us to pick the right
> solution...
> 1) do we need to provide synch APIs, asynch APIs or both ?
>
[jg] my inclination would be both.
>
> 2) do we support a data stream vs "one shot" (for instance, do we
> provide a call to allow a continuous stream of data in or out of Spectre)?
>
[jg] I think this would be nice to allow, so all clients do not have to
poll.
>
> 3) how long should we be storing data ?
>
>
> Next Steps
> -------------
> 1) Start discussions based on the above content
>
> 2) Identify vendors and investigate the requirements for getting data from
> different clouds. (EC2, vmware, RHEV-M, rackspace ?)
>
> 3) look at high level questions, build requirements.
>
>
> _______________________________________________
> spectre-devel mailing list
> spectre-devel(a)lists.fedorahosted.org
> https://fedorahosted.org/mailman/listinfo/spectre-devel
>
14 years, 7 months
Thoughts on Deltacloud monitoring - now called Spectre
by Mark Wagner
OK, so I've been mulling this over after some talks with Jay last week.
Thought I'd put something in writing. This isn't intended to be a solution,
rather the intent is to capture thoughts on what may be needed for the design
and then we can discuss the best implementation.
It is not clear to me where to draw the line for the component responsible for
various pieces of functionality. I will list *most everything* I can think of here
and the ensuing discussions can help sort out where that functionality should
lie.
In lieu of hard requirements, I have listed some assumptions that I'm starting
with to frame the discussion. If you feel that any of these are incorrect,
please speak up so we don't spend too much time on bad assumptions.
Assumptions
-----------
1) For the sake of this discussion, I am listing a lot of functionality
that may not be in the initial release. Best to consider it now and make
sure the design can handle it. Also some of this function will most likely
not end up in the monitoring component, but the functionality needs to
be discussed, if for no reason other than to explain what the trade-offs are...
2) the use of this monitoring data will need to cover one to many users.
a) There is no real upper bound to the number of users at this point in
time. However we should decide on the order of magnitude (1K, 100K, etc)
b) must be able to provide data to each individual user concurrently.
c) Must provide the capability to arrange users and aggregate data in
groups and in a hierarchical manner. (This is an example of my stating
functionality that some Deltacloud component should provide; I mention
it here as it could impact the design even if the "ownership" ends up
elsewhere. Best to decide "where does this functionality belong?" in
this discussion and understand its ramifications.)
3) Billing does not imply a simple end-of-the-month run, but rather the
ability to monitor usage and calculate a running "balance" (but does
not need to consider payments outside of "reset" functionality).
a) down the road we will want to be able to provide some enforcement
of spending (Jay is only budgeted to spend $100, don't let him
go over it)
As Jay has pointed out, this is most likely outside the scope of Spectre
(though this is why we need to understand the entire problem), except for:
1. Spectre needs to collect whatever data portal requires for billing
2. Archiving of this information beyond the time a provider may do so
needs to be accounted for by Spectre and configurable per provider
4) Data collection from the various clouds will be a "pull" event. In my
mind, pull is preferred because it allows the app to control the load it
generates. With a push model, it could lead to server overload.
a) It also needs to be dynamic, so that as the user adds additional VMs
the data for those is also gathered. So "poll" is still technically
what is happening; we just need a dynamic polling mechanism.
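A minimal sketch of what such a dynamic pull-based poller could look like (all class and method names here are hypothetical, not existing Deltacloud code): each cycle re-asks the cloud for its current VM list, so VMs added since the last cycle are picked up automatically and the app stays in control of the load it generates.

```ruby
# Hypothetical dynamic pull-based collector: discovery happens on every
# cycle, so newly created VMs are polled without any push from the provider.
class DynamicPoller
  def initialize(cloud)
    @cloud = cloud # assumed to respond to #list_vms and #fetch_stats(vm_id)
  end

  # One polling cycle: discover the current VM set, then pull stats
  # for each VM. Returns a hash of vm_id => stats.
  def poll_once
    @cloud.list_vms.each_with_object({}) do |vm_id, samples|
      samples[vm_id] = @cloud.fetch_stats(vm_id)
    end
  end
end
```

A scheduler would call `poll_once` on whatever interval the provider (or its billing for data access) allows.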
5) For monitoring, data will be collected on all "active" Deltacloud users.
Here an "active" user implies that the user is logged in, or that someone is
monitoring the group that the user is part of.
a) This is just a "stake in the ground." We really should investigate
whether it's more efficient to get all the data all the time or only
as needed.
b) if we only get data as needed, we will need some extra logic to
determine if there are "holes" in the data and catch up as needed.
** Jay has mentioned that some flexibility should be built in here; there
may be providers who charge for accessing the data based on the access
rate.
6) In general, I can see a need for "business logic" specific to
each vendor's cloud. This will include billing rates, types of data
available, etc. I'm wondering if there is a "global" way to handle
this in Deltacloud already...
a) At some level, the data retrieval / monitoring aspects will need to
have cloud specific knowledge. For instance, CloudWatch keeps the data
for two weeks. Exactly where in the design this information needs to be
kept still needs to be determined.
My main goal for thinking of a business layer is to allow for a vendor
specific "modules" to be created and used w/o any changes to the underlying
stats layer.
7) It is assumed that any Red Hat cloud product will be treated like JAC
(Just Another Cloud).
8) Users: this could be one of the more complicated pieces to get right.
My initial thought is a single user can have many different cloud
associations. However, it's not clear if we need to track the case where
a single cloud account could be shared between users and we need to account
for each user's usage. Again, please keep in mind that this document
is intended to cover all the functionality we see down the road, so
it may not be an initial release goal but we may need to plan for it.
So my thoughts are that the user management, tracking, etc. is done under
some other Deltacloud component, and Spectre will just need to be able to
store and retrieve data based on some type of unique user / cloud
identifier. (Hey, it's hard to pull implementation out of design.)
However we still need to define things like a data request. Should we
require the caller to be very specific about the user/cloud combinations?
And lots of other stuff like that...
9) Spectre will need to be able to provide data to a caller for a specific cloud
in a manner that is more efficient than recursively walking a list
of users. Thus Spectre must be able to retrieve data based on user or
cloud or a combination of both.
10) I would expect that the Spectre functionality is provided in a manner
that would allow for it to be distributed, federated or as a "module"
to some other application. My initial thought would be a web service
type of architecture, but let's not get ahead of ourselves...
Straw man proposal (http://en.wikipedia.org/wiki/Straw_man_proposal)
------------------
In general, we need to define something that stores and retrieves data,
that has clear APIs for both the top and bottom layers. For this
discussion, I would propose that we view Spectre with three distinct layers:
1) On the "top" level, we need to define methods for data insertion and retrieval.
2) The middle layer is basically the traffic cop; it is also the least defined.
3) The bottom level provides the interaction to the data store(s). For the
bottom level, we need to define an API that we will call to interact
with the data store(s).
I will dive a little deeper into these layers below.
I am assuming that specific interfaces for both the top and bottom layers
will need to be created in order to interface to different cloud vendors
or data stores. For the sake of discussion, I will refer to these as
"modules" although the intent is to help facilitate the discussion,
not to imply a specific implementation.
One of the main goals of the resultant architecture is that a new vendor
specific module can be created for either the top or bottom layers without
impacting design / code of the middle Stats layer.
Let's try this bottom up.
-------------------------
The goal of defining an API at the bottom of this stack is to allow for different
data stores to be used. By clearly defining the API, anyone should be able to
write their own interface layer to the data store. Examples of data stores would
be RRD, memcache, mySQL, etc.
This would seem to imply that the data store specific module be able to translate
a request for "Marks EC2 usage data from last week" into the specific language
needed to access its data store. It will need to format the response from its
data store back into a common (but yet to be defined) intermediate data format
and pass it "up the stack".
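To make the idea concrete, here is a sketch of the intermediate record and the bottom-layer store API, using a trivial in-memory backend as a stand-in for RRD/memcache/MySQL. All field and class names here are illustrative, not an agreed-upon interface.

```ruby
# Hypothetical intermediate data format: one record per measurement.
Sample = Struct.new(:user, :cloud, :metric, :time, :value)

class MemoryStore
  def initialize
    @samples = []
  end

  def write(sample)
    @samples << sample
  end

  # A request like "Mark's EC2 usage data from last week" arrives as
  # { user: "mark", cloud: "ec2", from: t0, to: t1 }; a real backend
  # would translate this into its native query language. Omitted keys
  # match everything; results come back in the same intermediate format.
  def read(query)
    @samples.select do |s|
      (query[:user].nil?  || s.user  == query[:user]) &&
        (query[:cloud].nil? || s.cloud == query[:cloud]) &&
        (query[:from].nil?  || s.time >= query[:from]) &&
        (query[:to].nil?    || s.time <= query[:to])
    end
  end
end
```

Because the query is store-agnostic, swapping backends means writing a new module with the same `write`/`read` surface.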
Configuration data (User credentials, db name, directory structure, etc. )
- Two ways to go here.
a) Each data store module should be responsible for the required
configuration information. This should be loaded by the module when it is
initialized. In other words, the data service should not need to know any
of the details.
b) Alternatively, the API should include a "required" set of calls that the
service can call to retrieve the configuration data required and store
the information. (The module indicates it needs dbname, userid, pwd, etc)
This way the overall service would still be used for controlling the
configuration. The parameters would need to be supplied by the module so
that the data service could remain data store agnostic.
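Option (b) could look something like the sketch below: the module declares which configuration keys it needs, and the overall service stays store-agnostic by just supplying them (module name and keys are made up for the example).

```ruby
# Hypothetical store module that advertises its required configuration.
class MysqlStoreModule
  REQUIRED_CONFIG = %i[dbname userid pwd].freeze

  # The service asks the module what it needs...
  def self.required_config
    REQUIRED_CONFIG
  end

  # ...then hands the values back; the module checks it got everything.
  def initialize(config)
    missing = REQUIRED_CONFIG - config.keys
    raise ArgumentError, "missing config: #{missing.join(', ')}" unless missing.empty?
    @config = config
  end
end
```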
Not sure if we should allow the use of multiple modules simultaneously. It
would be nice to use a memcache-like mechanism for some things to avoid more
expensive lookups.... This could be something like a write through cache.
Lots to discuss...
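The write-through layering idea might be sketched like this (interface names invented): a fast cache sits in front of the real store; writes go to both, reads try the cache first and fall back to the slower store on a miss.

```ruby
# Hypothetical write-through layering of two stores.
class WriteThroughCache
  def initialize(cache, backing)
    @cache = cache
    @backing = backing # both assumed to respond to #read(key) / #write(key, value)
  end

  def write(key, value)
    @cache.write(key, value)
    @backing.write(key, value)
  end

  def read(key)
    @cache.read(key) || @backing.read(key)
  end
end
```

Since the layered object exposes the same `read`/`write` surface, it can itself be stacked, which is the "layering" Jay describes.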
Middle Layer
------------
This is the traffic cop of the data service. It could be fairly lightweight,
just taking input, validating it, and passing it through. For data
requests, it could do some coalescing of requests; provide the ability to
look in a local data cache, etc.
One major area that will need to be looked at is security. This layer
would seem to be the location where any security would be implemented.
Not sure how much we want or need.
If this layer does any work on data, it will be the "intermediate format"
Top Layer
---------
This is the layer that is called to store or retrieve data. My initial thoughts
are that while these two types of functionality are at the same level, they
are drastically different in what they do so I'll treat them separately.
Data Input API ("mystery data collection module")
--------------
This provides an API for storing data in the data service. It will be called
by the cloud specific module. My thoughts are that these modules will be
used to pull data from the cloud and push it into the data store.
It is the responsibility of the module to take the data from the cloud
and translate it into the intermediate data format.
It should be possible for many different modules to access this API in
parallel.
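A cloud-specific input module could look like the following sketch: it takes provider-native datapoints (here a CloudWatch-flavored hash, purely illustrative) and translates them into the intermediate format before pushing them through the data-input API (the "sink"). Every name here is an assumption, not real Deltacloud or AWS code.

```ruby
# Hypothetical EC2 input module translating native datapoints into the
# intermediate format expected by the data-input API.
class Ec2InputModule
  def initialize(sink)
    @sink = sink # assumed to respond to #store(record)
  end

  def push(user, native_points)
    native_points.each do |point|
      @sink.store(user: user, cloud: "ec2", metric: "cpu",
                  time: point["Timestamp"], value: point["Average"])
    end
  end
end
```

Many such modules (one per provider) can run in parallel against the same input API.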
Data Retrieval
---------------
There is clearly a need to provide data back to a caller in different formats.
This API will need to support that. I think the main design decision would
be how to structure this. I am almost thinking that sticking with the module
design will work well. This will allow end users to add their own modules. It
will also allow for mechanisms to allow more levels of data processing to be
added w/o polluting the main API. For instance, you could create a module to
compute rolling averages.
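The rolling-averages example could be a retrieval-side module as small as this sketch (names hypothetical): it layers a computation on top of raw samples without touching the main API.

```ruby
# Hypothetical retrieval-side processing module: rolling average over
# consecutive windows of raw sample values.
class RollingAverageModule
  def initialize(window)
    @window = window
  end

  # Turns [v1, v2, v3, ...] into the averages of each consecutive
  # window of @window values.
  def process(values)
    values.each_cons(@window).map { |w| w.sum.to_f / @window }
  end
end
```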
So this layer will need some thought to choose the right solution for future
needs and maintainability. (Hint: it's easier to drop support for a module than to
change the main API down the road...)
This level must be able to translate a request for "Marks EC2 usage data
from last week" into the intermediate language.
It should go w/o saying that this API must support concurrent access.
Higher Level questions
-----------------------
So some design questions that will hopefully lead us to pick the right solution...
1) do we need to provide synch APIs, asynch APIs or both ?
2) do we support a data stream vs "one shot" (for instance, do we
provide a call to allow a continuous stream of data in or out of Spectre)?
3) how long should we be storing data ?
Next Steps
-------------
1) Start discussions based on the above content
2) Identify vendors and investigate the requirements for getting data from
different clouds. (EC2, vmware, RHEV-M, rackspace ?)
3) look at high level questions, build requirements.
14 years, 7 months
Unsupported operations... what to do...
by Michael Neale
Hi All. In implementing the Rackspace driver, I noticed that I can't
stop a server (only delete it) - so is there some way we should allow
the driver to throw/return something that says an operation is not
supported at all for a given fabric (until we find a way)? Or should
a list of what is allowed be returned to the framework?
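Both options could be sketched roughly like this (all names invented, not actual Deltacloud driver API): a driver raises a dedicated error for operations its fabric can't perform, and/or exposes the list of operations it does support so the framework can check up front.

```ruby
# Hypothetical error type the framework could catch and report.
class OperationNotSupported < StandardError; end

class ExampleRackspaceDriver
  SUPPORTED_OPS = %i[create destroy reboot].freeze

  # Option 2: the framework asks what is allowed before offering it.
  def supports?(op)
    SUPPORTED_OPS.include?(op)
  end

  # Option 1: the driver raises if asked anyway.
  def stop_instance(_id)
    raise OperationNotSupported, "stop is not supported by this fabric"
  end
end
```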
Also - I noticed that in the driver code there is storage volumes and
snapshots (once again, not directly supported, although there are a
few options) - this is not mentioned in the page on building a driver
in the doco (just a missing bit? or is it optional/non-important
anyway? In Rackspace, all instances are persistent with backups,
slightly different to perhaps EC2 and other fabrics).
Thoughts?
--
Michael D Neale
home: www.michaelneale.net
blog: michaelneale.blogspot.com
14 years, 7 months
Scope of Deltacloud (cooperation with REST-*)
by Bob McWhirter
Howdy guys--
First, congrats to everyone for doing awesome with Deltacloud and the
launch. Jeremy, the website looks fantastic. Kim and the Video team
also made us all look good.
I'd like us to consider what the "end-game" of Deltacloud as an API
looks like. Where do we draw the line of what is and is not
Deltacloud, from an API viewpoint?
I personally think that we might want to leverage the community
that'll also be building around REST-* (http://jboss.org/reststar).
There, the intent is to build RESTful APIs (but not necessarily
implementations) for things like messaging, transactions, and storage.
Basically, for things like Amazon's Simple Queueing Service (SQS) and
SimpleDB. Implementation of these things using JBoss technologies
falls under the scope of my JBoss Cloud project. JBoss Cloud (as you
may or may not know) aims to make JBoss middleware "cloud-ready". So
far that's meant getting JBoss AS up in a PaaS configuration "in the
cloud" with clustering and such. Furthering that, I aim to get HornetQ
(our messaging service) and Infinispan (our data-grid project) up as
RESTful, cloud-ready services, to directly respond to SQS and SimpleDB.
I'm going to be working with REST-* on those efforts.
Where does that leave Deltacloud? I think it leaves Deltacloud Just
Fine. Deltacloud addresses the IaaS level of abstraction, which is at
least a level lower than REST-* seems to want to address.
Deltacloud Framework can certainly aim to tie it all together, also.
We can have drivers that work with my HornetQ-REST impl of
REST-Messaging, and a driver to map to Amazon's SQS (or another provider's
"native" queue cloud-service).
Portal, likewise, can continue to overlay policy and other value-add
on top of the "bare" protocols.
Leaving us with:
Deltacloud-API --
defining IaaS-level interactions
REST-* (Messaging, Transactions, Storage) --
defining the "cloud services" offered as Services-as-a-Service
defining how we'd like to see Messaging, Transaction, Storage, and
whatever-else servicey things. This is potentially an open-ended
porftolio of specs.
Deltacloud-Portal/Proxy --
consumes Deltacloud-API and REST-* standards, orchestrates the actual
coordination of mixed on-premise/off-premise/private/public resources
according to user-supplied policy.
Perhaps, if needed, Deltacloud-Messaging could extend the basic
REST-Messaging API defined through the REST-* community if any
cloud-specific extensions are needed.
-Bob
14 years, 7 months