For anyone following the Great Haskell Record Debates: is there still a proposal on the table that looks anything like this?

data Name (s :: String) = Name

class Has (name :: String) (record :: *) (field :: *) | name record -> field where
  getField :: Name name -> record -> field
  setField :: Name name -> field -> record -> record


data Foo = Foo { x :: Int }
  ==>
data Foo = Foo Int
instance Has "x" Foo Int where
  getField _ (Foo x) = x
  setField _ x (Foo _) = Foo x
-- No declaration for 'x' is generated. 'x' is NOT a variable.

(r.x) ==> getField (Name :: Name "x") r
(.x) ==> getField (Name :: Name "x") -- syntactically like a section

r { x = foo } ==> setField (Name :: Name "x") foo r
({x = foo}) ==> setField (Name :: Name "x") foo -- Desugars to composition for multiple assignments
({x =}) ==> setField (Name :: Name "x") -- Only one field assignment allowed here
-- Type-changing assignment is NOT allowed for records

fun (Foo { x = pat })
  ==> fun __tmp@(Foo _) | pat <- getField (Name :: Name "x") __tmp
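For anyone who wants to play with this: here's a minimal sketch of the idea that compiles with today's GHC, assuming we use Symbol from GHC.TypeLits in place of String (promoted string literals have kind Symbol) and write out by hand what the (r.x) and r { x = ... } sugar would desugar to. None of this is the proposal itself, just an executable approximation:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}
import GHC.TypeLits (Symbol)

-- A proxy carrying the field name at the type level.
data Name (s :: Symbol) = Name

-- Field name plus record type determines the field type.
class Has (name :: Symbol) record field | name record -> field where
  getField :: Name name -> record -> field
  setField :: Name name -> field -> record -> record

-- The desugared form of: data Foo = Foo { x :: Int }
data Foo = Foo Int

instance Has "x" Foo Int where
  getField _ (Foo x) = x
  setField _ x (Foo _) = Foo x

-- What (r.x) and (r { x = 3 }).x would desugar to, written by hand:
example :: (Int, Int)
example =
  let r = Foo 7
  in ( getField (Name :: Name "x") r
     , getField (Name :: Name "x") (setField (Name :: Name "x") 3 r) )
```

Note that, as in the proposal, no variable x is ever bound; "x" lives only at the type level.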

- Compatibility is completely broken.
- Type ambiguities will be a bear, and likely require some heavyweight defaulting reform.

Still, this appeals to me in a lot of ways. In particular, the idea that record field names are not variables feels like getting something right that Haskell initially got wrong.
What do you mean by "record field names are not variables"?
I mean that there is no variable called x. There is a desugaring for syntax of the forms (r.x) or (.x) or (r {x = ...}) and so on, but those places where x occurs are not expressions, and x isn't a variable. Field names get their own namespace via the Has class.
Does it let you make a fundep saying that the string and record type determine the field type?
Just to clarify, this wasn't meant to be a working model, or a complete proposal! Just a question about how much of this has been explored in the debate (which I sadly have not had a prayer of keeping up with).
I think you might be able to have two separate classes, Has as you had above, and HasP (allowing polymorphic updates) as follows:

class HasP (name :: String) record field | name record -> field where
  getP :: Name name -> record -> field
  setP :: HasP name record' field' => Name name -> field' -> record -> record'

But I haven't experimented with it and might be overlooking something silly.

Edit: yes, I'm up too late, and this makes no sense.
I'm not clear on how that would work, but I'll think it over. Another thing I missed entirely is controlling exports. Instances are automatically exported, meaning you couldn't have unexported record fields. Obviously that won't do.
Yeah, I was just being silly: there's nothing linking the record' type to the record type, which prevents you from doing anything useful with it.
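For the record, one standard way to link record' back to record is to give the class both the before and after types, with functional dependencies in each direction; this is essentially the shape that type-changing lenses later settled on. A hedged sketch (HasP's exact signature, Pair, and the instance are all made up here for illustration):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}
import GHC.TypeLits (Symbol)

data Name (s :: Symbol) = Name

-- s is the record before the update, t after; a and b are the
-- old and new field types.  The fundeps tie record' (here t) to
-- record (here s), which is what the earlier attempt was missing.
class HasP (name :: Symbol) s t a b
    | name s -> a, name t -> b
    , name s b -> t, name t a -> s where
  getP :: Name name -> s -> a
  setP :: Name name -> b -> s -> t

-- A record whose first field can change type under update.
data Pair a = Pair a Int

instance HasP "x" (Pair a) (Pair b) a b where
  getP _ (Pair x _)   = x
  setP _ x (Pair _ y) = Pair x y

-- A type-changing update: Pair Int becomes Pair String.
changed :: Pair String
changed = setP (Name :: Name "x") "hello" (Pair (1 :: Int) 2)
```

The cost is the extra class parameters, and ambiguity gets worse, not better; but it shows the link is expressible.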
Yes, it has been discussed. It's blocked because:

(a) If you have a record with a polymorphic field, you cannot perform an update that changes that field's type using your desugaring above.

(b) Issues with fields of higher-rank type. I forgot the details.
(b) would be that polymorphic types are not first class when you use type classes. You cannot have an instance for 'forall a. ...'.

They are also disallowed on the right of type families, apparently. So changing the fundep to an associated type wouldn't work.
This is the proposal I like best, but the lack of a way to handle type-changing updates does seem to be a problem.
+Simon Marlow Is it really that big of an issue? Type-changing updates are, in a sense, inherently global -- even if you need only update one field, that fact comes from knowing that a type parameter doesn't occur in the types of the remaining fields. On the implementation side, it also depends on a uniform representation of the type in memory. So it's not reasonable to expect to do them without complete knowledge of the type. Would it be the end of the world if they had to be written as constructing a complete new value, with a nice error message that tells people what to do?

I say this in full recognition of the fact that a syntax for constructing new values without using positional parameters is a gaping hole in what I've written above... but maybe the official SORF proposal has solved that one!
Another way of putting the point. Given:

module Test (Foo, x) where
data Foo a = Foo { x :: a, y :: Int }

I can currently say:

*Test> :t \r f -> r {x = f}
\r f -> r {x = f} :: Foo t -> a -> Foo a

I think that is very wrong. It leaks information about the field y, which I intentionally did not export, along with the data constructor. If I then change the type of y, I get

*Test> :t \r f -> r {x = f}
\r f -> r {x = f} :: Foo a -> a -> Foo a

But the only change I made was to the details of some non-exported field. I'd say the current system is broken for allowing type-changing updates, and they should be disallowed.
+Chris Smith I find type-changing record updates very useful, for what it's worth. I use a bit of Template Haskell to generate updaters, exactly because I want to compose them, and lens and fclabels can't do type-changing updates.

If record types were just the unordered set of (String, *) without any relationship between types of different fields that could help. You could then do:

type R a = { x :: a, y :: a }

And update x while changing its type. You'd simply get a record type that R is no longer a synonym for.

You could represent it with class Has:

class Has (name :: String) record field | ... where
  type Modified record name field
  setField :: Name name -> field -> record -> Modified record name field

Example instances:

type R a = { x, y :: a, z :: Int }

instance Has "x" (R a) a where
  type Modified (R a) "x" b = { x :: b, y :: a, z :: Int }
  setField _ new r = r{ x = new }

I guess the key is making records an independent flexible type construct without any relationship between the types of different fields.
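The Modified idea above can be approximated in ordinary Haskell, without structural record types, by giving the record one type parameter per independently-typed field, so the associated type family has somewhere to put the changed type. A sketch covering just the update half (HasMod, R, and rx/ry/rz are illustrative names, not part of any proposal):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             TypeFamilies, FlexibleInstances #-}
import GHC.TypeLits (Symbol)

data Name (s :: Symbol) = Name

-- The associated type computes the record type that results from
-- writing a value of a (possibly different) type into the field.
class HasMod (name :: Symbol) record where
  type Modified record name b
  setField :: Name name -> b -> record -> Modified record name b

-- One type parameter per independently-typed field stands in for
-- the structural record { x :: a, y :: b, z :: Int }.
data R a b = R { rx :: a, ry :: b, rz :: Int }

instance HasMod "x" (R a b) where
  type Modified (R a b) "x" c = R c b
  setField _ new (R _ y z) = R new y z

r1 :: R Int Int
r1 = R 1 2 3

-- Updating "x" changes its type from Int to String;
-- the type of y is untouched.
r2 :: R String Int
r2 = setField (Name :: Name "x") "one" r1
```

The awkward part is exactly what structural records would fix: the library author has to anticipate which fields may vary independently and parameterize accordingly.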
+Chris Smith I agree there's an inconsistency between partial export of record fields and type-changing update. It's a good point that I hadn't realised before.

However, I believe it's important to allow type-changing update in the case when all the fields are visible, as a shorthand for rebuilding the record with all the same fields except for the ones you're updating. You might be surprised how often this is used: for example in GHC we have lots of records in the HsSyn datatype, which is abstracted over the type of identifiers, and several passes in the compiler update HsSyn by changing the identifiers from one type to another. (I thought that type-changing update was a rarely-used exotic feature until SPJ pointed out to me that we were doing it all over the place in GHC!)

So perhaps the language should disallow update if some fields are not in scope, or perhaps only of polymorphic fields... I'm not sure.
Are we saying that changing types using update syntax is important? Or that it's important that there be a way to do that? I'm asking because I've mostly convinced myself at this point that what we're talking about here isn't really an update at all. Maybe in the type-changing case, instead of writing (r { x = a }), we ought to have been writing (Foo { x = a, _ <- r }). You have to mention the data constructor, and it's slightly longer, but still not unreasonably long.

Thinking about access control, I think the important thing is that the data constructor must be in scope. Without that, you may have all the fields in scope today, but tomorrow a new one is added... and we're back in the situation where a change to some unexported API element just broke your code. (This also avoids breaking the seamlessness of computed fields in SORF; instead of needing "all the fields", which is a meaningless notion when the set of fields is an open type class, you just need all the fields that occur in that constructor.)
+Eyal Lotem Right, so correct me if I'm wrong, but from your type declarations, you're talking about full-fledged structural record typing? It certainly would be interesting and could work, but is just a little further out in the design space than I was looking. At the point that you have arbitrary structural record types, I'm no longer sure what the advantage of the type class approach really is. One probably ought to just teach the type system about record types and then let named records be synonyms and be done with it. I'm still unsure about access control, but it's not as if access control is solved for the overloaded-fields approaches either.

(Editing here) To be clear, "teach the type system about record types" probably involves a constraint piece, since you want to avoid subtyping. So type classes are certainly one route for doing that.
+Chris Smith I am saying that there should be some shorthand syntax for update that allows type-changing, not necessarily the standard update syntax. Perhaps we can make do with RecordWildCards - that would at least make it clearer that you're building a new record with certain fields replaced (RecordWildCards is slightly distasteful, but hard to do without...).
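For comparison, here is roughly what a type-changing update written as explicit reconstruction with RecordWildCards looks like in today's Haskell; a sketch, with Foo and setX as illustrative names:

```haskell
{-# LANGUAGE RecordWildCards #-}

data Foo a = Foo { x :: a, y :: Int }

-- Type-changing update as explicit reconstruction: the wildcard
-- pattern brings every field of the argument into scope, and the
-- wildcard in the construction fills in every field we don't
-- explicitly override.
setX :: b -> Foo a -> Foo b
setX new Foo{..} = Foo { x = new, .. }
```

Note that this requires the constructor Foo (and all of its fields) to be in scope, which is exactly the access-control condition discussed above.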
Okay then. I wonder whether there's any easy way to count, for example, how many uses (say, in GHC and Hackage) of type-changing updates now are for record types with multiple constructors, and are updating a field that occurs in several constructors. That's the really problematic case; if we know the constructor and it's exported, then yes, some kind of wildcard syntax will do the job, except of course we want to pull fields from a record rather than punning on variable names from the current scope. But if you don't know the constructor, then things will get ugly.

That also still leaves the problems of (1) controlling the scope of exported instances without letting people break invariants of stuff like Data.Map, and (2) handling higher-rank types for fields.

Anyway, I now understand where this ran into troubles, which was my question. Thanks for the answers!