Consider the following algorithm to multiply a given array by a scalar.
def array1d_scale(myScalar, myArray):
    n1 = len(myArray)  # number of elements in the input array
    scaledArray = []
    for i in range(n1):
        # Indexing into an empty list would raise an IndexError,
        # so append each scaled element instead.
        scaledArray.append(myScalar * myArray[i])
    return scaledArray
What is the run-time complexity of the algorithm, using big-O notation?
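As a sanity check, the algorithm can be written as a self-contained, runnable Python function and exercised on a small input. The function name `array1d_scale` and the sample values are illustrative, not part of the original exercise:

```python
def array1d_scale(myScalar, myArray):
    # Build a new list holding each element multiplied by the scalar.
    scaledArray = []
    for i in range(len(myArray)):
        scaledArray.append(myScalar * myArray[i])
    return scaledArray

print(array1d_scale(3, [1, 2, 4]))  # prints [3, 6, 12]
```

Note that the loop body does a constant amount of work per element, and the loop runs once for each of the input's elements.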