Heterogeneous structural rule is not safe
The problem with the proof draft in Heterogeneous structural rule is that the following lemma cannot be proved:
Lemma trans2_subst_m_ext: forall mode X A B C D,
  trans2 A B ->
  trans2 C D ->
  trans2 (subst_tt_m mode X C A) (subst_styp_m mode X D B).
Compare with the corresponding statement using standard substitution:
  trans2 A B ->
  trans2 C D ->
  trans2 (subst_tt X C A) (subst_styp X D B).
However, we can still try to prove the structural unfolding lemma (see Structural unfolding lemma) for fsub (with standard substitution) via the translation system.
Fixing the encoding:
The idea is to introduce an extension ISubstExt of ISubst (using type constructors?). The substNum trait still has the same compositional interface as doubleNum in Attempt 4 - structural rules + encoding F< with intersection types, except that the input arguments of type ISubst & Exp are replaced with ISubstExt.
type ExpSig<Exp> = NumSig<Exp> & VarSig<Exp>;
type ISubst<Ext> = mu X. { subst : String -> X & Ext -> X };
or more compactly?
type ISubst = mu (Y <: X). { subst : String -> Y -> X };
Note: ISubst is a type function :: * -> *
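For example (a worked instantiation, using IDouble = mu X. { double : X } from Attempt 4):
ISubst<IDouble> = mu X. { subst : String -> X & (mu b. { double : b }) -> X }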
substNum[Ext] (self : ExpSig<ISubst<Ext> & Ext>)
  : ExpSig<ISubst<Ext> & Ext, ISubst<Ext>> = {
Lit (val : Int) = fold [ISubst<Ext>]
{ subst (_ : String) (_ : ISubst<Ext> & Ext) = self.Lit val }
Add (e1 e2 : ISubst<Ext> & Ext ) = fold [ISubst<Ext>]
{ subst (s : String) (v : ISubst<Ext> & Ext) = self.Add
((unfold [ISubst<Ext> & Ext] e1).subst s v)
((unfold [ISubst<Ext> & Ext] e2).subst s v) }
---- type check:
--- e1 : ISubst<Ext> & Ext <: ISubst<Ext>
--- unfold [ISubst<Ext> & Ext] e1 :
--- { subst : String -> ISubst<Ext> & Ext -> ISubst<Ext> & Ext }
--- (unfold [ISubst<Ext> & Ext] e1).subst s v : ISubst<Ext> & Ext
--- self.Add : ISubst<Ext> & Ext
--- -> ISubst<Ext> & Ext -> ISubst<Ext> & Ext
--- overall, the folded expression has type
--- String -> ISubst<Ext> & Ext -> ISubst<Ext> & Ext
--- <: String -> ISubst<Ext> & Ext -> ISubst<Ext>
Var (s1 : String) = fold [ISubst<Ext>]
  { subst (s : String) (v : ISubst<Ext> & Ext) =
      if s == s1 then v
      else (self.Var s1) }
};
substNum :: forall (Ext : *), Trait[ExpSig<ISubst<Ext> & Ext>, ExpSig<ISubst<Ext> & Ext, ISubst<Ext>>]
Recall:
doubleNum :: forall Ext, Trait[ExpSig<IDouble & Ext>, ExpSig<IDouble & Ext, IDouble>]
Problem: is

ISubst<IDouble> & IDouble
  =   mu a. { subst : String -> a & (mu b. { double : b }) -> a }
    & mu b. { double : b }

equivalent to

mu a. {
  subst : String -> a -> a,
  double : a
} ?
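As a sanity check (only a sketch), compare one step of unfolding on each side, writing T for the merged recursive type on the right:

-- left-hand side, unfolding each component separately:
  { subst : String -> ISubst<IDouble> & IDouble -> ISubst<IDouble> }
& { double : IDouble }

-- right-hand side, unfolding T:
{ subst : String -> T -> T, double : T }

The question is then whether ISubst<IDouble> & IDouble and T can be related in both directions at the occurrences of a.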
e : A [a |-> B]      mu a. A <: B
---------------------------------
fold [B] e : B

e : A      A <: mu a. B
--------------------------------- structural unfolding
unfold [A] e : B [A/a]
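For instance, the unfold calls in doubleNum (recalled below) rely on structural unfolding; a worked instance, assuming IEval and IDouble from Attempt 4:

e : IEval & IDouble      IEval & IDouble <: IDouble = mu X. { double : X }
--------------------------------------------------------------------------
unfold [IEval & IDouble] e : { double : IEval & IDouble }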
Week Updates
Address the counterexample:
Remark: whichever idea we pursue, we should attempt to find a semantics for the distributive subtyping
`mu a. (mu b. b -> Int) & mu a. (mu b. a -> Bool)`
~~>
[a. [b. { b : b -> Int } -> Int ] ]
& ([a. [b. a -> Bool ] ]
[a- |-> {a : [b. **a** -> Bool ] } ])
`mu a. (mu b. (b -> Int) & (a -> Bool))`
~~>
[a. [b.
( { b : (b -> Int) & ~~(a -> Bool)~~ } -> Int )
& ( a -> Bool )
[a- |-> {a : [b.
( { b : (b -> Int) & (a -> Bool) } -> Int)
& ( **a** -> Bool )
]
} ]
] ]
A ~~> A'
---------------------
mu a . A ~~> [A' [a- |-> {a : A' [a- |-> {a : A' }] }]]
A1 & A2 <:(x) B <: C
Idea: restrict the split in target subtyping
Litao: then the transitivity of the target subtyping might be a problem.
Bruno: we need some technique (matching recursive labels with nominal labels) to achieve the restriction.
Benefit: we can only look at one side of the translated type
Design concerns:

mu a. (mu b. b -> Int) & mu a. (mu b. a -> Bool)
  <: mu a. (mu b. (b -> Int) & (a -> Bool))
should be rejected

mu a. (mu b. b -> Int) & mu a. (mu b. b -> Bool)
  <: mu a. (mu b. (b -> Int) & (b -> Bool))
should be accepted

mu a. (mu b. a -> Int) & mu a. (mu b. a -> Bool)
  <: mu a. (mu b. (a -> Int) & (a -> Bool))
should be accepted? (yes, the two merged mu b. types are equivalent)
Can we only consider splitting well-ordered translated recursive types?
- i.e. reject mu a. mu b. T unless: if T contains b, then T contains no a
No: since we have the below “well-ordered” types

mu a. a -> (mu b. b -> Int) & mu a. a -> (mu b. b -> Bool)
  <: mu a. a -> (mu b. (b -> Int) & (b -> Bool))
should be rejected
Idea: expand the translation in target subtyping
Benefit: maybe we could avoid changing the declarative spec of the source subtyping
Related Paper (encoding)
Shao’s paper
Encoding: try the eq encoding
In Attempt 4 - structural rules + encoding F< with intersection types we already defined the following:
type NumSig Exp OExp = {
Lit : Int -> Trait[Exp,OExp];
Add : Exp -> Exp -> Trait[Exp,OExp]
};
type IEval = mu X. { eval : Int };
type IDouble = mu X. { double : X };
type IEvalDouble = IEval & IDouble;
doubleNum[Exp] (self: NumSig<IDouble & Exp>)
  : NumSig<IDouble & Exp, IDouble> = {
  Lit (val: Int) = fold [IDouble] { double = self.Lit (val + val) };
  Add (e1 : IDouble & Exp) (e2 : IDouble & Exp) = fold [IDouble]
    { double = self.Add
        (unfold [IDouble & Exp] e1).double
        (unfold [IDouble & Exp] e2).double
    }
};
e1 = new evalNum ,, doubleNum @ IEval : Trait[NumSig<IEvalDouble>]
We try to extend it with an eq method.
Attempt 1: add the eq method as a whole
Attempt 1: implement the whole eq, together with its auxiliary methods. Seems OK, but requires the “separate” interface.
type IEqImpl = mu X. { eqLit : Int -> Bool,
eqAdd : X -> X -> Bool,
eq : X -> Bool }
eqNum[Exp] (self: Top) : NumSig<IEqImpl & Exp, IEqImpl> = {
Lit (val : Int) = fold [IEqImpl] {
eqLit (y : Int) = (val == y);
eqAdd (e1 e2 : IEqImpl) = False;
eq (e : IEqImpl) = (unfold [IEqImpl] e).eqLit val;
};
Add (e1 e2 : IEqImpl & Exp) = fold [IEqImpl] {
eqLit (y : Int) = False;
eqAdd (e1' e2' : IEqImpl) =
((unfold [IEqImpl] e1').eq e1) && ((unfold [IEqImpl] e2').eq e2) ;
eq (e : IEqImpl) = (unfold [IEqImpl] e).eqAdd e1 e2;
}
}
problem:
e = new evalNum ,, doubleNum @ (IEval & IEqImpl) ,, eqNum @ (IEval & IDouble)
but we can only create:
type NumSig Exp OExp = {
Lit : Int -> Trait[Exp,OExp];
Add : Exp -> Exp -> Trait[Exp,OExp]
};
with Exp := IEval & IDouble & IEqImpl
OExp := IEval & IDouble & IEqImpl <: IEvalDoubleEq'
-- Expected feature?
-- otherwise, we may write binary methods that are unsound
Actually, we can drop the polymorphic Ext parameter, since it can be subsumed by subtyping (we don’t have a self constructor for eq).
mu a. A & mu a. B <: mu a. A & B
e1 : mu X. { eval : Int }
e2 : mu X. { double : X }
e1 ,, e2 : mu X. { eval : Int, double : X }
e : mu X. { eval : Int, double : X , eq : X -> Bool }
e : mu X. { eval : Int, eq : X -> Bool }
eq (Lit x) e = e.eqLit(x)
eq (Add e1 e2) e = eqAdd1 e e1 /\ eqAdd2 e e2
eqAdd1 (Add e1 e2) e1' = eq e1 e1'
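For reference, a minimal Haskell sketch of this double dispatch over a plain algebraic data type (not the trait/merge encoding; Exp, eq, eqLit, eqAdd are illustrative names mirroring the IEqImpl interface, with a single eqAdd playing the role of eqAdd1/eqAdd2):

-- "Is the receiver a literal with this value?"
data Exp = Lit Int | Add Exp Exp

eqLit :: Exp -> Int -> Bool
eqLit (Lit y)   x = x == y
eqLit (Add _ _) _ = False

-- "Is the receiver an Add whose subterms equal these two expressions?"
eqAdd :: Exp -> Exp -> Exp -> Bool
eqAdd (Lit _)   _ _ = False
eqAdd (Add a b) x y = eq x a && eq y b

-- eq dispatches on the receiver and asks the other expression the
-- constructor-specific question, as the IEqImpl methods do above.
eq :: Exp -> Exp -> Bool
eq (Lit x)     e = eqLit e x
eq (Add e1 e2) e = eqAdd e e1 e2

The encoding questions below are about recovering this pattern while keeping both the constructors and the eq* methods extensible.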
type IEqImpl = mu X. { eqLit : Int -> Bool,
eqAdd : X -> X -> Bool,
eq : X -> Bool }
type IEqImpl [Ext :: * -> *] =
  mu X. ({ eqLit : Int -> Bool } & { eq : X -> Bool } & Ext[X])
eqAndEval (Add e1 e2) e = (e1.eval + e2.eval) == e.eval
X -> X -> Bool
eqNum (self: Top) : NumSig<IEqImpl, IEqImpl> = {
Lit (val : Int) = fold [IEqImpl] {
eqLit (y : Int) = (val == y);
eqAdd (e1 e2 : IEqImpl) = False;
eq (e : IEqImpl) = (unfold [IEqImpl] e).eqLit val;
};
Add (e1 e2 : IEqImpl) = fold [IEqImpl] {
eqLit (y : Int) = False;
eqAdd (e1' e2' : IEqImpl) =
((unfold [IEqImpl] e1').eq e1) && ((unfold [IEqImpl] e2').eq e2) ;
eq (e : IEqImpl) = (unfold [IEqImpl] e).eqAdd e1 e2;
}
}
Note: NumSig<IEqImpl, IEqImpl> <: NumSig<IEqImpl & ... , IEqImpl>
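A sketch of why this holds, assuming the first parameter of NumSig is contravariant (Exp only occurs in constructor-argument positions); writing C for the elided partner of the intersection:

IEqImpl & C <: IEqImpl
---------------------------------------------------------- (NumSig contravariant in its first parameter)
NumSig<IEqImpl, IEqImpl> <: NumSig<IEqImpl & C, IEqImpl>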
problem:
e = new evalNum ,, doubleNum @ (IEval & IEqImpl) ,, eqNum @ (IEval & IDouble)
Attempt 2: adding new constructors:
Recall: we already have:
type NumSig Exp OExp = {
Lit : Int -> OExp;
Add : Exp -> Exp -> OExp
};
type IEval = mu X. { eval : Int };
type IDouble = mu X. { double : X };
type IEvalDouble = IEval & IDouble;
type IEqImpl = mu X. { eqLit : Int -> Bool,
eqAdd : X -> X -> Bool,
eq : X -> Bool }
evalNum : Top -> NumSig<IEval, IEval>
doubleNum : forall Exp. NumSig<IDouble & Exp, IDouble & Exp>
-> NumSig<IDouble & Exp, IDouble >
eqNum : forall Exp. Top -> NumSig<IEqImpl & Exp, IEqImpl>
We are now adding
type NegSig Exp OExp = {
Neg : Exp -> OExp
};
type INegEq = mu X. { eqNeg : X -> Bool
}
type INegEqImpl = mu X. { eqLit : Int -> Bool,
eqAdd : X -> X -> Bool,
eqNeg : X -> Bool,
eq : X -> Bool }
-- add eqNeg to Lit and Add
eqNegNum (self: Top) : NumSig<INegEq, INegEq> = {
Lit (val : Int) = fold [INegEq] {
eqNeg (e : INegEq) = False
};
Add (e1 e2 : INegEq) = fold [INegEq] {
eqNeg (e : INegEq) = False;
};
}
-- implement the eq for Neg
eqNegNeg (self: Top) : NegSig<INegEqImpl, INegEqImpl> = {
  Neg (val : INegEqImpl) = fold [INegEqImpl] {
    eqLit (y : Int) = False;
    eqAdd (e1 e2 : INegEqImpl) = False;
    eqNeg (e : INegEqImpl) = (unfold [INegEqImpl] e).eq val;
    -- alternatively:
    -- (unfold [INegEqImpl] val).eq e
    eq (e : INegEqImpl) = (unfold [INegEqImpl] e).eqNeg val;
  }
}
problem: eqNegNeg is implemented against the whole recursive type mu a. { ... eq, eqNeg },
while the others use (mu a. { ... eq } & mu a. { eqNeg }).
This strict subtyping is fine for the result type,
but will cause problems for merging virtual constructor arguments
(i.e. Add (Neg ...) (...) is not typed).
Maybe we should enforce a relaxed form of argument inputs?
type IEqLit = mu X. { eqLit : Int -> Bool }
type IEqAdd = mu X. { eqAdd : X -> X -> Bool }
type IEq = mu X. { eq : X -> Bool }
type IEqImpl = IEqLit & IEqAdd & IEq
type IEqImpl' = mu X. { eqLit : Int -> Bool,
eqAdd : X -> X -> Bool,
eq : X -> Bool }
-- this does not help with the merge
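Note (a sketch, using the rule mu a. A & mu a. B <: mu a. A & B from above, applied twice): the split interface is a subtype of the merged one, which is what the e1 : IEqImpl <: IEqImpl' comments in the code below rely on:

IEqImpl  =  mu X. { eqLit : Int -> Bool } & mu X. { eqAdd : X -> X -> Bool } & mu X. { eq : X -> Bool }
        <:  mu X. { eqLit : Int -> Bool, eqAdd : X -> X -> Bool, eq : X -> Bool }
         =  IEqImpl'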
eqNum (self: Top) : NumSig<IEqImpl, IEqImpl'> = {
Lit (val : Int) = fold [IEqImpl'] {
eqLit (y : Int) = (val == y);
eqAdd (e1 e2 : IEqImpl') = False;
eq (e : IEqImpl') = (unfold [IEqImpl'] e).eqLit val;
};
Add (e1 e2 : IEqImpl) = fold [IEqImpl'] {
eqLit (y : Int) = False;
eqAdd (e1' e2' : IEqImpl') =
((unfold [IEqImpl'] e1').eq e1) && ((unfold [IEqImpl'] e2').eq e2) ;
-- well typed, since e1 : IEqImpl <: IEqImpl'
-- alternative implementation:
-- ((unfold [IEqImpl'] e1).eq e1') && ((unfold [IEqImpl'] e2).eq e2') ;
-- is also well typed
-- e1 : IEqImpl <: IEqImpl'
-- unfold [IEqImpl'] e1 : { ... eq : IEqImpl' -> Bool }
-- e1' : IEqImpl'
eq (e : IEqImpl') = (unfold [IEqImpl'] e).eqAdd e1 e2;
-- well typed, since e1 : IEqImpl <: IEqImpl'
}
}
-- We have to make input IEqImpl', the output IEqImpl
eqNum (self: Top) : NumSig<IEqImpl' & IEval, IEqImpl> = {
  Lit (val : Int) =
    fold [IEqLit] { ... } ,,
    fold [IEqAdd] { ... } ,,
    fold [IEq] { eq (x : IEq) = (unfold [IEq] x).eqLit val };
  Add ...
}
However, we lose the ability to do mutual recursion.
It seems that we need something like a higher-kinded recursive operator that allows us to abstract over residuals in the recursive body.
A higher order example:
IEqLitImpl (Ext :: * -> *) = mu X. (
{ eqLit : Int -> Bool } ,, { eq : X -> Bool } ,, Ext(X)
)
eqLit (Ext :: * -> *)
(extImpl : Ext )
(self : Top) : LitSig<IEqLitImpl @ Ext> = {
Lit (val : Int) = fold [ IEqLitImpl @ Ext ] {
eqLit :
}
}
Aside concern in the encoding:
Observation: we have to delay merging the mu a. {...} & mu a. { ... } & mu a. { ... } interfaces as late as possible (even at the virtual constructor stage).
Implication for the semantics: a term e1 : mu a. A & mu a. B has to be interpreted using structural unfolding.
Value interpretation of intersection of recursive types:

⟦ mu a. A & mu a. B ⟧
  = { v | exists w. v = fold w /\
          |> (   ( w in ⟦ A[mu a. A & mu a. B / a] ⟧ )
              \/ ( w in ⟦ B[mu a. A & mu a. B / a] ⟧ ) ) }
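For comparison (the standard interpretation, not specific to this proposal), a single recursive type would be interpreted as:

⟦ mu a. A ⟧ = { v | exists w. v = fold w /\ |> ( w in ⟦ A[mu a. A / a] ⟧ ) }

so the definition above differs in that each component is unfolded with the whole intersection substituted for a.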