On Fri, Aug 17, 2018 at 9:40 AM Jeremy Cline <jeremy(a)jcline.org> wrote:
On 08/16/2018 06:22 PM, Michal Novotny wrote:
> On Thu, Aug 16, 2018 at 4:34 PM Jeremy Cline <jeremy(a)jcline.org> wrote:
>>
>> So, am I right in saying your main objection is the Python package? Or
>> do you object to then packaging that as an RPM?
>>
>
> I don't really have any objections. I would just like to be able to read
> messages as simple language-native structures and don't depend on
> anything else than the base messaging framework when publishing or
> receiving messages.
> Okay. You can do that now. There's a base Message class whose only
> schema restriction is that the message is a JSON object. You're free to
> use that and access the JSON directly, or to use an AMQP client directly
> (as long as you follow the same on-the-wire format).
Will the framework provide me with a way to automatically validate against
a schema that I have locally defined, e.g.

my_locally_defined_schema_that_i_can_then_publish_in_docs = {
    "type": "object",
    "properties": {
        "price": {"type": "number"},
        "name": {"type": "string"},
    },
}
if I want to use just the base Message? It would be good if I could
pass the body_schema to the Message __init__ method and have it
validated by api.publish, which would then invoke e.g.
message.validate().
Would it be possible?
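To make the idea concrete, here is a minimal self-contained sketch of the flow I'm asking for. The body_schema keyword argument is the hypothetical feature under discussion, not existing API, and the tiny type check merely stands in for real JSON Schema validation:

```python
# Sketch of the proposed flow: pass a locally defined schema to
# Message.__init__ and have publish() validate the body against it.
# NOTE: the body_schema argument is hypothetical; a tiny type check
# stands in for real jsonschema validation to keep this self-contained.

_TYPES = {"number": (int, float), "string": str, "object": dict}

class Message:
    def __init__(self, body=None, body_schema=None):
        self.body = body or {}
        self.body_schema = body_schema or {"type": "object"}

    def validate(self):
        if not isinstance(self.body, dict):
            raise ValueError("message body must be a JSON object")
        for field, spec in self.body_schema.get("properties", {}).items():
            if field in self.body and not isinstance(self.body[field], _TYPES[spec["type"]]):
                raise ValueError(f"'{field}' must be of type {spec['type']}")

def publish(message):
    message.validate()  # reject invalid messages before they hit the wire
    return True         # stand-in for the actual AMQP publish

schema = {
    "type": "object",
    "properties": {"price": {"type": "number"}, "name": {"type": "string"}},
}
ok = publish(Message(body={"price": 9.99, "name": "tarball"}, body_schema=schema))
```

A message whose body violates the schema (say, {"price": "free"}) would then be rejected at publish time instead of confusing consumers.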
> Just be aware that any messages you send that way will integrate poorly
> (or not at all) with services like notifications.
I would like it to integrate well if possible...
> I really don't recommend this approach since we've been down this path
> before and it's worked rather poorly.
Okay, but the question is why it worked poorly. From what I've seen, the
problems with fedmsg-meta would be solved simply by the explicit schema
validation on the publisher side, which is a really cool thing you are
introducing and will help a lot.
Another thing that is quite unclear is why that service got completely
stuck when it couldn't process a message. Normally, you would just collect
what you can out of the incoming message and send what you have collected
(i.e. with some fields left unfilled). It only constructed human-readable
strings for notifications, right? I don't think that's mission critical,
even though it's important.
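To illustrate the "collect what you can" behavior I mean, here is a toy notification-text builder; the field names and wording are made up, not anything fedmsg actually used:

```python
def humanize(body):
    # Fall back to generic placeholders instead of crashing when a
    # field is missing from the incoming message.
    user = body.get("user", "someone")
    package = body.get("package", "a package")
    return f"{user} built {package}"

complete = humanize({"user": "msuchy", "package": "copr-cli"})
partial = humanize({"user": "msuchy"})  # "package" field missing
```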
But anyway, if validation on the publisher side is built into the
framework, that's enough to solve that problem. Why would we need
anything else?
> Schema are Immutable
>
> Message schema should be treated as immutable. Once defined, they should
> not be altered. Instead, define a new schema class, mark the old one as
> deprecated, and remove it after an appropriate transition period.
I think that adding new fields into the JSON should simply be allowed.
That's not a backward-incompatible change: the consumer doesn't use those
fields because they have only just been introduced, so you can't break a
consuming script that way.
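A quick sketch of why adding a field is safe for existing consumers (the field names are illustrative):

```python
def handle(body):
    # A consumer written against the original schema only reads the
    # fields it knows about.
    return body["name"]

old_message = {"name": "copr-cli", "price": 1.0}
new_message = {"name": "copr-cli", "price": 1.0, "license": "GPLv2"}  # field added later

# The consumer behaves identically on both message versions.
result_old = handle(old_message)
result_new = handle(new_message)
```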
> Finally, you must distribute your schema to clients. It is recommended
> that you maintain your message schema in your application’s git
> repository in a separate Python package. The package name should be
> <your-app-name>_schema.
What I was thinking about is to have the .json schema in a separate file,
with the recommendation that it contains a URL to itself as its ID (e.g.
https://pagure.io/copr/copr/raw/master/f/dist-git/my_schema.json). Given
that the schema can be self-descriptive (because of the 'description'
fields), this should be completely enough to even provide the
documentation, and at the same time that schema will be the actual schema
the publisher uses for validation. I think this is a much more relaxed
approach which, at the same time, solves the problem of the publisher
sending something other than what he/she thinks he/she is sending. I
think that was the main problem we were having, no?
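For example, such a self-describing schema file could look like this; it reuses the toy price/name schema from above, with "$id" pointing at the file's own public URL and the 'description' fields (both illustrative) doubling as documentation:

```python
import json

my_schema = {
    "$id": "https://pagure.io/copr/copr/raw/master/f/dist-git/my_schema.json",
    "type": "object",
    "properties": {
        "price": {
            "type": "number",
            "description": "Price of the item (illustrative field).",
        },
        "name": {
            "type": "string",
            "description": "Human-readable item name (illustrative field).",
        },
    },
}

# The same dict, serialized, is what would live in my_schema.json and be
# both published as documentation and used by the publisher for validation.
serialized = json.dumps(my_schema, indent=4)
```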
> --
> Jeremy Cline
> XMPP: jeremy(a)jcline.org
> IRC: jcline