Hi there,
Consider this simple code in GDScript:
```gdscript
var basis = Basis(Vector3.ZERO)
var origin = Vector3(1, 2, 3)
var t1 = Transform(basis, origin)
var t2 = Transform(t1.basis, t1.origin)
t2.origin.z = -3
print("t1.origin = " + str(t1.origin))
print("t2.origin = " + str(t2.origin))
```
The result is:

```text
t1.origin = (1, 2, 3)
t2.origin = (1, 2, -3)
```
In a programming language like Ruby, `t2.origin` would reference the same object as `t1.origin`, because the constructor receives its arguments by reference.
In Ruby it would be:
```ruby
class Vector3
  attr_accessor :x, :y, :z

  def initialize(x, y, z)
    @x = x
    @y = y
    @z = z
  end

  def to_s
    "(#{x},#{y},#{z})"
  end
end

class Basis
  attr_accessor :vector

  def initialize(vector)
    @vector = vector
  end
end

class Transform
  attr_accessor :basis, :origin

  def initialize(basis, origin)
    @basis = basis
    @origin = origin
  end
end

basis = Basis.new(Vector3.new(0, 0, 0))
origin = Vector3.new(1, 2, 3)

t1 = Transform.new(basis, origin)
t2 = Transform.new(t1.basis, t1.origin)
t2.origin.z = -3

puts "t1.origin = #{t1.origin}"
puts "t2.origin = #{t2.origin}"
```
And here, the output is what I would expect, since both transforms share the same `Vector3`:

```text
t1.origin = (1,2,-3)
t2.origin = (1,2,-3)
```
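To summarize the two behaviors side by side, here is a sketch in Python (an illustrative model of the semantics only, not Godot's actual implementation — the class names `TransformByValue` and `TransformByReference` are made up for this comparison). The value-semantics version copies the vector when it is passed in, matching what GDScript appears to do; the reference-semantics version keeps the object itself, like Ruby:

```python
import copy

class Vector3:
    """Toy stand-in for GDScript's Vector3 (illustrative only)."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def __repr__(self):
        return f"({self.x}, {self.y}, {self.z})"

class TransformByValue:
    """Models GDScript's observed behavior: the Vector3 is a value
    type, so it is copied when passed to the constructor."""
    def __init__(self, origin):
        self.origin = copy.copy(origin)

class TransformByReference:
    """Models Ruby's behavior: the constructor keeps a reference to
    the very same Vector3 object."""
    def __init__(self, origin):
        self.origin = origin

# Value semantics: t2 gets its own copy, so t1 is untouched.
t1 = TransformByValue(Vector3(1, 2, 3))
t2 = TransformByValue(t1.origin)
t2.origin.z = -3
print(t1.origin, t2.origin)  # (1, 2, 3) (1, 2, -3)

# Reference semantics: r1 and r2 share one Vector3.
r1 = TransformByReference(Vector3(1, 2, 3))
r2 = TransformByReference(r1.origin)
r2.origin.z = -3
print(r1.origin, r2.origin)  # (1, 2, -3) (1, 2, -3)
```

The first pair of prints reproduces the GDScript result above, and the second pair reproduces the Ruby result.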
So what's the difference in GDScript in terms of implementation?