On Sat, Apr 28, 2012 at 11:30 AM, Andrea Aime
<andrea.aime@anonymised.com> wrote:
Moving forward with the discussion about the code being "proof of concept".
By "proof of concept" I don't mean that the code is bad, I mean that it's
at a stage where it proves the concept but is not ready to be integrated
into a code base that was meant to become stable soon, simply because it's
last minute (last-minute changes are rarely good) and incomplete.
Well, the "soon to be stable" 2.2 branch is out of the question.
Question is whether we'll allow progress to occur in the "soon to be
trunk" 2.3 branch, or (surprisingly) everything needs to be nailed
down to the minimal detail to allow new development to happen on
trunk.
This one is mostly about what's happening above the catalog.
Right now we have some exemplar use cases, which have indeed been picked
with care.
However, there are only three; that's my biggest worry. I'm pretty sure
that by developing the full switch you would have seen more use cases and
found more bugs in the implementation.
Now, in order to reap the benefits of the scalable API one has to make the
code actually use the new scalable methods whenever large amounts of data
are read from the catalog, which means also switching most of the other
capabilities documents, the Describe* methods (most of them can take no
layer/coverage/feature type identifier and describe the whole server as a
result), the GUI, and I guess some parts of RESTConfig.
I judged it smarter to identify the driving use cases first rather than
go and update the whole code base in one shot.
Note the use cases are meant to be representative of all the
(existing) different uses of the catalog where scalability is a
concern. If you can identify more, then that would be awesome.
For instance, the three picked represent the cases where:
- you need to process either the full list of a given type of resource,
or a subset using some simple filtering and sorting. The example is
GetCaps, but it also applies to Describe* and RESTConfig's lists of
resources.
- paging, filtering with an "iLike"-like predicate, and sorting: the GUI
- client-side full scans where part of the filter is encodable and part
is not, which usually implies building a lot of objects that are then
discarded: SecureCatalog
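The three usage patterns above share one idea: push filter, sorting, and
paging down to the backend and stream the results through a closeable
iterator, instead of materializing full in-memory lists. A minimal sketch
of that pattern follows; all names here (MiniCatalog, CloseableIterator,
list) are hypothetical stand-ins for illustration, not the actual
GeoServer catalog interfaces:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical streaming-iterator contract: like Iterator, but the caller
// must close it so the backend can release cursors/connections.
interface CloseableIterator<T> extends Iterator<T>, AutoCloseable {
    @Override
    void close(); // narrowed: no checked exception
}

// Hypothetical in-memory catalog standing in for a real backend.
class MiniCatalog {
    private final List<String> layerNames;

    MiniCatalog(List<String> layerNames) {
        this.layerNames = layerNames;
    }

    // Filter + sort + page on the "backend", then stream results to the
    // caller instead of building one big List up front.
    CloseableIterator<String> list(Predicate<String> filter, int offset, int count) {
        Iterator<String> it = layerNames.stream()
                .filter(filter)
                .sorted()
                .skip(offset)
                .limit(count)
                .iterator();
        return new CloseableIterator<String>() {
            public boolean hasNext() { return it.hasNext(); }
            public String next() { return it.next(); }
            public void close() { /* nothing to release in this in-memory sketch */ }
        };
    }
}

public class CatalogPagingSketch {
    public static void main(String[] args) {
        MiniCatalog catalog = new MiniCatalog(
                Arrays.asList("roads", "rivers", "states", "railways", "lakes"));
        // GUI-style query: names starting with "r" (an "iLike 'r%'" stand-in),
        // sorted, page of size 2 starting at offset 1 (skips "railways")
        List<String> page = new ArrayList<>();
        try (CloseableIterator<String> it = catalog.list(n -> n.startsWith("r"), 1, 2)) {
            while (it.hasNext()) {
                page.add(it.next());
            }
        }
        System.out.println(page); // [rivers, roads]
    }
}
```

The SecureCatalog case is the same shape with an extra client-side
predicate wrapped around the returned iterator, applied to the part of
the filter the backend cannot encode.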
With that in place, it looks like it'd be possible to migrate the rest
of the offending code where those usage patterns apply. Maybe it's not
such a good idea, but it seemed so to me and to the people inside
OpenGeo with whom I validated the proposal before going public with it.
Now, let's say we commit the proposal as it is now, with only the exemplar
cases.
You argue it is done to minimize the risk. I say the net effect is that
it actually makes it way too easy, if not natural, to do all of the above
work outside of the proposal framework with little scrutiny, because
everything related to scalability is turned into "bug" or "improvement"
jiras, forgetting that these jiras wire up with code that is not as well
tested as the rest, and thus put us at risk of getting something
fundamentally broken while we are doing bugfix releases.
I see your point. While the proposal remains under discussion, I see no
problem in starting to port more stuff over on the proposal's branch.
Yet we needed the proposal to come out of incubation, so I think it has
been a good approach: gather all this feedback earlier in the process
instead of going public with it once we had migrated everything.
So in the end the same amount of work gets done in the 2.2.x series, but
with very little scrutiny, and the proposal looks less scary because it
changes less code. Seems like a trick; that's why I called it the
"Trojan horse".
If so, then every iterative approach is a trick too.
Even if you "promise" not to do any of these changes in the 2.2.x series,
the fact remains that these changes are getting in very late in the game,
three months after I asked to start the release process and was told to
wait "two weeks".
I don't "promise". I _consult_ with the PSC about the feasibility of
getting any of this into the 2.2.x series, and obey the PSC decision.
I don't remember having told you to wait two weeks to get GSIP 69
in place for 2.2.x. Rather the contrary: I remember having told you
this work was not targeting 2.2.x but a new trunk. If later in the
game I ask the PSC what its opinion is about doing so, I don't see
what's disrespectful about asking. If, on the contrary, I ever did
tell you to wait two weeks with regard to GSIP 69, I very much
apologize.
As much as you feel my feedback is unfair, try to put on the other plate
of the scale how unfair it has already been for me.
Please, explain how the GSIP 69 proposal has been unfair for you, so
that I'm more careful in the future.
I'd much prefer to see the work done in a new trunk, done fully, done well,
and eventually be backported later if we don't find a compromise on timed
releases.
Again, this is negative feedback, but I don't want to be a show stopper;
if everybody else feels the proposal should go on, I'll vote -0 on it.
This is not negative feedback, it's feedback. I think by the time you
replied to this the 2.2.x debate was already out of the question, but I
may be wrong. In any case, I _agree_ it should be done on a new trunk.
Cheers,
Gabriel.
Cheers
Andrea
--
Gabriel Roldan
OpenGeo - http://opengeo.org
Expert service straight from the developers.