database - REST for soft delete and recovering soft deleted resources is limited


This is not so much a technical question as a subject for reflection.

REST has democratized the way we serve resources over the HTTP protocol, letting developers build cleaner projects by splitting resources from the user interface (the back end deals with the back end through REST APIs).

Regarding the API, most of the time we use GET, PUT/PATCH (hmm, meh?), POST, and DELETE, which together mimic CRUD operations on the database.

But as time goes by on our projects, we feel the UX can be improved by adding tons of great features. For example, why should a user feel afraid of deleting a resource? Why not add a recovery system (like the one in the Google Keep app that lets you undo a deletion, which I think is awesome in terms of UX)?

One practice that prevents unintentional deletion is a dedicated column in the table that represents the resource. For example, to delete a book, clicking the delete button flags the row with "deleted = true" in the database, and rows flagged as deleted are hidden when browsing the list of resources (GET).
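A minimal sketch of this flag-based approach, assuming a simple SQLite table (the table and column names here are illustrative, not from any particular framework):

```python
# Soft deletion with a "deleted" flag, using an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO books (title) VALUES ('Dune'), ('Emma')")

def soft_delete(book_id):
    # The DELETE request handler runs this instead of a real SQL DELETE.
    conn.execute("UPDATE books SET deleted = 1 WHERE id = ?", (book_id,))

def list_books():
    # The GET handler hides soft-deleted rows.
    return [row[0] for row in conn.execute("SELECT title FROM books WHERE deleted = 0")]

soft_delete(1)
print(list_books())  # ['Emma'] -- 'Dune' is flagged, not destroyed
```

The row is still physically present, which is exactly what makes recovery possible later.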

This last point conflicts with our dear, beloved REST pattern, where there is no distinction between "delete" and "destroy" methods.

What I mean is: should we think about making REST evolve with our UX needs, which would mean making the HTTP protocol evolve, or should REST stay puristic about resource management, and should we instead follow the HTTP protocol without trying to bend it, adapting through workarounds (like using PATCH for soft deletion)?

Personally, I can see at least four new methods that would qualify a resource's possible states:

  • DELETE becomes a way to prevent other methods from having any impact on the resource
  • DESTROY becomes more dramatic, removing every trace of the resource
  • RECOVER is a way to tell the other methods "hey guys, I'm coming back, stay tuned"
  • TRASH lists the deleted resources
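Whatever verbs would carry them, the four operations describe a small state machine, which can be sketched against a plain in-memory store (all names here are hypothetical, purely to illustrate the intended semantics):

```python
# State machine behind the four proposed operations.
resources = {}  # id -> {"data": ..., "deleted": bool}

def delete(rid):
    # Soft delete: other operations should stop affecting the resource.
    resources[rid]["deleted"] = True

def destroy(rid):
    # Hard delete: remove every trace of the resource.
    del resources[rid]

def recover(rid):
    # Bring a soft-deleted resource back.
    resources[rid]["deleted"] = False

def trash():
    # List the soft-deleted resources.
    return [rid for rid, r in resources.items() if r["deleted"]]

resources[1] = {"data": "a book", "deleted": False}
delete(1)
print(trash())  # [1]
recover(1)
print(trash())  # []
```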

What made me think about this was researching a clean REST solution to deal with this resource behavior. I have seen websites and posts advising the use of PUT or PATCH to make soft deletion usable, but that kind of feels wrong, doesn't it?
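For concreteness, here is what that PATCH workaround amounts to: soft deletion expressed as a partial update of a "deleted" field. The endpoint and field names are illustrative; the merge logic is a naive sketch, not a full JSON Merge Patch implementation.

```python
import json

book = {"id": 1, "title": "Dune", "deleted": False}

# The client would send something like:
#   PATCH /books/1
#   Content-Type: application/json
#
#   {"deleted": true}
patch_body = json.loads('{"deleted": true}')

def apply_patch(resource, patch):
    # Naive merge: copy each supplied field onto the resource.
    updated = dict(resource)
    updated.update(patch)
    return updated

book = apply_patch(book, patch_body)
print(book["deleted"])  # True
```

The resource stays addressable afterwards, which is the whole point of the workaround -- and also why it feels like an abuse of a generic update method.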

My thoughts on the problem:

  • Is there a big step between proposing new HTTP methods and updating the previous ones? (I heard about this HTTP/2 thing; maybe they could ship in it?)
  • Does this make sense outside the web development realm? I mean, would such changes impact other domains than ours?

I'm not sure it makes sense within the web development realm; the starting premises seem wrong.

RFC 7231 offers this explanation of POST:

The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.

A riddle: if that is the official definition of POST, why do we need GET? Anything GET can do can be done with POST.

The answer is that the additional constraints on GET allow participants to make intelligent decisions using only the information included in the message.

For example, because the header data informs an intermediary component that the method is GET, the intermediary knows the action is safe, so if a response is lost the message can be repeated.
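That decision can be made from the method token alone. A sketch of an intermediary applying the generic RFC 7231 method properties, with no knowledge of the application behind the API:

```python
# Method properties defined by RFC 7231: safe methods have no
# significant side effects; idempotent methods can be repeated
# without changing the outcome.
SAFE = {"GET", "HEAD", "OPTIONS", "TRACE"}
IDEMPOTENT = SAFE | {"PUT", "DELETE"}

def may_repeat(method):
    # An intermediary may resend an idempotent request after a lost
    # response; repeating a POST could duplicate the action.
    return method.upper() in IDEMPOTENT

print(may_repeat("GET"))   # True
print(may_repeat("POST"))  # False
```

Nothing in this decision depends on what the resource actually is; that is precisely the separation the answer goes on to describe.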

The entire notion of crawling the web depends on the fact that we can follow safe links to discover new resources.

The browser can pre-fetch representations, because the information encoded in the link tells it that the message is safe.

The way Jim Webber describes it: "HTTP is an application, but it's not your application." The HTTP specification defines the semantics of the messages, so that a generic client can be understood by a generic server.

To use your example: your API consumer may care about the distinction between delete and destroy, but the browser doesn't; the browser just wants to know what message to send, what the retry rules are, what the caching rules are, how to react to various error conditions, and so on.

That's the power of REST -- you can use any browser that understands the media types of the representations and get correct behavior, even though the browser is completely ignorant of your application semantics.

The browser doesn't know whether it is talking to an internet message board or the control panel of your router.

In summary: your idea looks to me as though you are trying to achieve richer application semantics by changing the messaging semantics; that violates the separation of concerns.

