Consider the following algorithm to compute a scalar multiple of a matrix.
def scalar_mult_matrix(myScalar, myMatrix):
    rows = number of rows in myMatrix
    cols = number of columns in myMatrix
    scaled_matrix = a rows x cols array
    for row in [0, 1, ..., rows - 1]:
        for column in [0, 1, ..., cols - 1]:
            scaled_matrix[row, column] = myScalar * myMatrix[row, column]
    return scaled_matrix
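A minimal runnable sketch of the pseudocode, assuming the matrix is represented as a plain nested list of rows (the snake_case names mirror the pseudocode's identifiers):

```python
def scalar_mult_matrix(my_scalar, my_matrix):
    # Dimensions of the input matrix (a list of row lists).
    rows = len(my_matrix)
    cols = len(my_matrix[0])
    # Allocate a rows x cols result matrix filled with zeros.
    scaled_matrix = [[0] * cols for _ in range(rows)]
    for row in range(rows):
        for column in range(cols):
            scaled_matrix[row][column] = my_scalar * my_matrix[row][column]
    return scaled_matrix

# Example: scaling a 2 x 3 matrix by 2.
print(scalar_mult_matrix(2, [[1, 2, 3], [4, 5, 6]]))  # [[2, 4, 6], [8, 10, 12]]
```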
Let `n` denote the number of rows and `m` denote the number of columns in the
`myMatrix` input parameter. What is the run-time complexity of the algorithm, in big-O notation?
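One way to check the loop-count reasoning empirically: the innermost assignment runs exactly once per (row, column) pair. The sketch below (the `count_inner_iterations` helper is hypothetical, added only for illustration) counts those executions:

```python
def count_inner_iterations(rows, cols):
    # Count how many times the body of the nested loops runs
    # for a rows x cols input matrix.
    count = 0
    for row in range(rows):
        for column in range(cols):
            count += 1  # stands in for the constant-time assignment
    return count

# For an n x m matrix the body runs n * m times, e.g. 3 * 4 = 12.
print(count_inner_iterations(3, 4))  # 12
```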