Data analytics is about the search for patterns between different data sets.
Architecture is about the creation of patterns between different material sets.
Data visualization borrows from both camps to present information or knowledge in a user-friendly format.
The Data Lake and the Data Pool are dirty; beware.
Ever wonder why data requires so much cleaning before it becomes useful in Data Analytics?
It has been estimated that 70 to 90% of the time is spent cleaning dirty data.
Maybe there is a market for an embedded data-cleaning agent?
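The cleaning burden is easy to underestimate. As a minimal sketch, assuming records arrive as Python dictionaries (the field names here are hypothetical, not from any real dataset), an embedded cleaning pass might drop incomplete rows, normalise text fields and remove duplicates:

```python
def clean(records):
    """Drop incomplete rows, normalise text fields and deduplicate."""
    seen = set()
    cleaned = []
    for row in records:
        # Drop rows with missing values -- a common source of "dirt".
        if any(v is None or v == "" for v in row.values()):
            continue
        # Normalise free-text fields before comparing.
        norm = {k: v.strip().lower() if isinstance(v, str) else v
                for k, v in row.items()}
        key = tuple(sorted(norm.items()))
        if key in seen:  # skip exact duplicates after normalisation
            continue
        seen.add(key)
        cleaned.append(norm)
    return cleaned

raw = [
    {"city": " Las Vegas ", "count": 3},
    {"city": "las vegas", "count": 3},   # duplicate once normalised
    {"city": "", "count": 1},            # incomplete row
]
print(clean(raw))  # only one row survives
```

Even this toy pass makes the point: most of the code is devoted to deciding what to throw away, not to any analysis.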
In the Maker-Hacker environment, is data disposable?
With research trending towards the Experiential, where are the common data points between one experiment and the next?
Data capture has a primary function of countering one-off trends, yet datasets are becoming more divergent over time.
Is it the overlapping of distinct datasets that holds the promise of Data-Driven Design?
Is it still just enough to state “Do No Harm”?
Algorithms are becoming useful tools. How do we ensure they are used only for good?
Can an Algorithm learn right from wrong?
A recent article on Mashable highlighted an important concept: Algorithms need Editors!
“Twitter’s ‘LasVagas’ hashtag fail shows the worst part of algorithms…
…Twitter’s system looked at the various Las Vegas shooting-related hashtags and chose the misspelling for whatever reason. And the people involved couldn’t do anything about it…
This is exactly why journalists have editors—and algorithms need them, too.”
How can digital data become human-centered with Empathy?
Numbers and data are honest and unemotional.
How will AI learn Empathy?
How will AI account for parameters that fall outside the norm, when the dataset rules are written to exclude some factors simply as noise?
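The risk is concrete: a hard-coded noise rule silently discards rare but genuine events. A small sketch, with illustrative numbers and an assumed two-standard-deviation cut-off (not any particular system's rule):

```python
import statistics

# One rare, genuine event (95) sits among routine readings.
readings = [10, 11, 9, 10, 12, 11, 10, 95]

mean = statistics.mean(readings)
sd = statistics.pstdev(readings)

# Rule: anything more than 2 standard deviations from the mean is "noise".
kept = [x for x in readings if abs(x - mean) <= 2 * sd]
print(kept)  # the outlier 95 is silently discarded
```

Whoever wrote the cut-off decided, in advance, which kinds of events the system is allowed to notice.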
Where is the serendipity in AI?
Special Issue: Digital Property: Open-Source Architecture
Volume 86, Issue 5
Issue edited by: Wendy W Fok, Antoine Picon
Process for using Evolute Tools Pro
- Define Reference Geometry
- Create the Coarse Mesh
- Refine Optimized Surface
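To illustrate the coarse-to-refined step in generic terms (this is not Evolute Tools Pro's actual algorithm, only a sketch of midpoint subdivision), each triangle of a coarse mesh can be split into four smaller ones, with further passes refining the surface toward the reference geometry:

```python
def midpoint(a, b):
    # Average two points component-wise.
    return tuple((p + q) / 2.0 for p, q in zip(a, b))

def subdivide(triangles):
    """One refinement pass: each triangle (a, b, c) becomes four."""
    refined = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        refined += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return refined

# Coarse mesh: a single triangle in the plane of the reference geometry.
coarse = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
fine = subdivide(coarse)
print(len(fine))        # 4 faces after one pass
print(len(subdivide(fine)))  # 16 faces after a second pass
```

In a production workflow the refinement would also pull new vertices back onto the reference surface; this sketch only shows how the face count grows.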